The Variational InfoMax AutoEncoder

2020 International Joint Conference on Neural Networks (IJCNN)

Abstract
The Variational AutoEncoder (VAE) learns an inference model and a generative model simultaneously, but at the optimum only one of the two is actually learned. This behaviour is a consequence of the ELBO learning objective, which can be optimised by a non-informative generator. To address this issue, we propose a learning objective, the Variational InfoMax (VIM), that learns a maximally informative generator while keeping the network capacity bounded. The contribution of the VIM derivation is twofold: an objective that learns both an optimal inference model and an optimal generative model, and an explicit definition of the network capacity, an estimate of the network's robustness.
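The abstract refers to the standard VAE ELBO, whose KL term can be driven to zero by a generator that ignores the latent code (the "non-informative generator" above). As context, here is a minimal sketch of the standard ELBO for a diagonal-Gaussian encoder with a standard-normal prior; this is textbook VAE material, not the paper's VIM objective, and the function names are illustrative only.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL(q(z|x) || p(z)) between a diagonal Gaussian q
    # (mean mu, log-variance logvar) and a standard-normal prior:
    # 0.5 * sum(mu^2 + var - 1 - logvar). This is the term the ELBO
    # can drive to zero when the generator ignores z.
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - 1.0 - logvar, axis=-1)

def elbo(log_likelihood, mu, logvar):
    # ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)); maximised in training.
    # log_likelihood is the (estimated) reconstruction term E_q[log p(x|z)].
    return log_likelihood - gaussian_kl(mu, logvar)
```

When `mu = 0` and `logvar = 0`, the posterior matches the prior, the KL term vanishes, and the ELBO reduces to the reconstruction term alone — the degenerate regime the VIM objective is designed to avoid.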
Keywords
generative adversarial network, machine learning, ELBO learning, variational autoencoder learning, variational infomax autoencoder, generative model, inference model, maximal informative generator, noninformative generator