Preventing Posterior Collapse with Levenshtein Variational Autoencoder

Havrylov Serhii

Abstract:

Variational autoencoders (VAEs) are a standard framework for inducing latent variable models that have been shown effective in learning text representations as well as in text generation. The key challenge with using VAEs is the *posterior collapse* problem: learning tends to converge to trivial solutions where the generators ignore...
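The posterior collapse problem named in the abstract can be illustrated numerically. For a diagonal Gaussian posterior and a standard normal prior, the KL term of the ELBO has a closed form; collapse occurs when the approximate posterior matches the prior exactly, driving this term to zero so the decoder can ignore the latent variable. The sketch below is illustrative background, not the paper's method; the function name `gaussian_kl` is an assumption for this example.

```python
import math

def gaussian_kl(mu: float, sigma: float) -> float:
    """KL( N(mu, sigma^2) || N(0, 1) ) for a single latent dimension.

    Closed form: 0.5 * (mu^2 + sigma^2 - log(sigma^2) - 1).
    """
    return 0.5 * (mu ** 2 + sigma ** 2 - math.log(sigma ** 2) - 1.0)

# An informative posterior keeps the KL term strictly positive:
print(gaussian_kl(1.5, 0.5))   # positive

# A collapsed posterior matches the prior, so the KL term vanishes
# and the latent variable carries no information:
print(gaussian_kl(0.0, 1.0))   # 0.0
```

Training objectives that merely maximize the ELBO can exploit this zero-KL solution whenever the decoder is expressive enough to model the data unconditionally, which is why dedicated remedies (such as the one this paper proposes) are needed.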
