Preventing Posterior Collapse with Levenshtein Variational Autoencoder

Serhii Havrylov

Abstract:

Variational autoencoders (VAEs) are a standard framework for inducing latent variable models that have been shown effective in learning text representations as well as in text generation. The key challenge with using VAEs is the posterior collapse problem: learning tends to converge to trivial solutions where the generators ignore...
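Posterior collapse is visible in the KL term of the VAE objective: when the approximate posterior matches the prior exactly, the KL vanishes and the decoder can ignore the latent code. A minimal NumPy sketch of this diagnostic (the `gaussian_kl` helper is illustrative, not from the paper):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# An informative posterior differs from the prior, so KL > 0.
kl_informative = gaussian_kl(np.array([1.5, -0.7]), np.array([-1.0, -0.5]))

# A collapsed posterior equals the prior N(0, I), so KL = 0 and the
# generator receives no information through the latent variable.
kl_collapsed = gaussian_kl(np.zeros(2), np.zeros(2))
```

Monitoring this per-example KL during training is a common way to detect collapse: values near zero across the dataset indicate the latent variable is being ignored.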
