Understanding disentangling in $\beta$-VAE
arXiv: Machine Learning (2018)
Abstract
We present new intuitions and theoretical assessments of the emergence of disentangled representation in variational autoencoders. Taking a rate-distortion theory perspective, we show the circumstances under which representations aligned with the underlying generative factors of variation of data emerge when optimising the modified ELBO bound in $\beta$-VAE, as training progresses. From these insights, we propose a modification to the training regime of $\beta$-VAE, that progressively increases the information capacity of the latent code during training. This modification facilitates the robust learning of disentangled representations in $\beta$-VAE, without the previous trade-off in reconstruction accuracy.
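The capacity-increase schedule described above can be sketched as a modified objective that penalises deviation of the KL term from a target capacity $C$ that grows linearly over training. The $\gamma\,|KL - C|$ form follows the paper's proposal; the specific hyperparameter values (`gamma`, `c_max`, `anneal_steps`) below are illustrative assumptions, not the authors' reported settings.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL(q(z|x) || N(0, I)) per sample, summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def capacity_annealed_loss(recon_error, mu, logvar, step,
                           gamma=1000.0, c_max=25.0, anneal_steps=100000):
    """Modified beta-VAE objective: recon + gamma * |KL - C|.

    C (in nats) is increased linearly from 0 to c_max over training,
    progressively raising the information capacity of the latent code.
    Hyperparameter values here are illustrative assumptions.
    """
    c = min(c_max, c_max * step / anneal_steps)  # current capacity target
    kl = float(np.mean(gaussian_kl(mu, logvar)))  # batch-mean KL
    loss = recon_error + gamma * abs(kl - c)
    return loss, kl, c
```

With a standard-normal posterior (`mu = 0`, `logvar = 0`) the KL term is zero, so early in training (small `C`) the penalty is negligible; as `C` grows, the encoder is pushed to use more latent capacity.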