SimpleGAN: Stabilizing Generative Adversarial Networks with Simple Distributions

ICDM Workshops (2019)

Abstract
Generative Adversarial Networks (GANs) are powerful generative models, but they usually suffer from unstable training and poor generation quality. Because the data and generation distributions are complex and lie in a high-dimensional space, it is difficult to measure the discrepancy between them, which is nevertheless vital for training successful GANs. Previous methods try to alleviate this problem by choosing better divergence metrics. Unlike these methods, in this paper we propose a novel method, SimpleGAN, that tackles the problem differently: transform the original complex distributions into simple ones in a low-dimensional space while preserving information, and then measure the discrepancy between the two simple distributions. This offers a new direction for stabilizing GAN training. Specifically, starting from the maximization of the mutual information between variables in the original high-dimensional space and the low-dimensional space, we derive a much simpler objective to optimize, namely a lower bound on the mutual information. In experiments, we apply the proposed method to different baselines, i.e., the traditional GAN, WGAN-GP, and DCGAN, on the CIFAR-10 dataset. Our method achieves clear improvements over these baseline models.
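The paper itself provides no code here, so the following is only a minimal PyTorch sketch of the idea as the abstract describes it, under our own assumptions: the network names, layer sizes, and the specific variational lower bound (a Barber-Agakov-style bound E[log q(x|z)], realized with a Gaussian q(x|z) as a negative reconstruction error) are illustrative choices, not the paper's published architecture. The key structural point it shows is that the discriminator compares real and generated samples through their low-dimensional codes, while an encoder/decoder pair keeps those codes informative.

```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration (e.g., flattened 32x32 RGB images).
DATA_DIM, CODE_DIM, NOISE_DIM = 3 * 32 * 32, 64, 128

# Encoder maps high-dimensional samples to simple low-dimensional codes.
encoder = nn.Sequential(nn.Linear(DATA_DIM, 512), nn.ReLU(), nn.Linear(512, CODE_DIM))
# Decoder defines the variational distribution q(x|z) used in the MI lower bound.
decoder = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(), nn.Linear(512, DATA_DIM))
generator = nn.Sequential(nn.Linear(NOISE_DIM, 512), nn.ReLU(), nn.Linear(512, DATA_DIM))
# Discriminator operates on low-dimensional codes, not on raw samples.
discriminator = nn.Sequential(nn.Linear(CODE_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()

def mi_lower_bound(x):
    """Variational lower bound on I(x; z) up to a constant:
    with a fixed-variance Gaussian q(x|z), E[log q(x|z)] is a negative MSE."""
    z = encoder(x)
    return -((decoder(z) - x) ** 2).mean()

def training_losses(x_real, noise):
    x_fake = generator(noise)
    z_real, z_fake = encoder(x_real), encoder(x_fake)

    # Standard non-saturating GAN losses, measured in the simple code space.
    d_real = discriminator(z_real.detach())
    d_fake = discriminator(z_fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_g = bce(discriminator(z_fake), torch.ones_like(d_fake))

    # Encoder keeps codes informative by maximizing the MI lower bound
    # on both real and generated data (hence the negation for minimization).
    loss_enc = -(mi_lower_bound(x_real) + mi_lower_bound(x_fake.detach()))
    return loss_d, loss_g, loss_enc

# Example single step with random data.
x = torch.randn(16, DATA_DIM)
eps = torch.randn(16, NOISE_DIM)
loss_d, loss_g, loss_enc = training_losses(x, eps)
```

In this sketch the generator's adversarial gradient flows through the encoder as well; whether SimpleGAN shares or freezes the encoder during the generator update is a detail the abstract does not specify.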
Keywords
Generative Adversarial Networks, adversarial training, deep learning, variational inference, information theory