Intriguing Properties of Modern GANs
CoRR (2024)
Abstract
Modern GANs achieve remarkable performance in terms of generating realistic
and diverse samples. This has led many to believe that “GANs capture the
training data manifold”. In this work we show that this interpretation is
wrong. We empirically show that the manifold learned by modern GANs does not
fit the training distribution: specifically, the manifold does not pass through
the training examples and passes closer to out-of-distribution images than to
in-distribution images. We also investigate the distribution over images
implied by the prior over the latent codes and study whether modern GANs learn
a density that approximates the training distribution. Surprisingly, we find
that the learned density is very far from the data distribution and that GANs
tend to assign higher density to out-of-distribution images. Finally, we
demonstrate that the set of images used to train modern GANs is often not part
of the typical set described by the GANs' distribution.
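The claim that the manifold "does not pass through the training examples" is typically tested by inverting the generator: searching for the latent code whose output is closest to a given image, and measuring the remaining reconstruction error. A minimal sketch of that idea with a toy linear generator (the setup, names, and dimensions here are illustrative, not the paper's actual models or experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": maps a 2-D latent code to a 5-D image space.
# Because G is linear, its manifold is a 2-D plane through the origin.
W = rng.standard_normal((5, 2))

def G(z):
    return W @ z

def distance_to_manifold(x, steps=2000, lr=0.01):
    """Approximate min_z ||G(z) - x|| by gradient descent on z,
    i.e. a crude generator inversion."""
    z = np.zeros(2)
    for _ in range(steps):
        grad = 2 * W.T @ (G(z) - x)  # gradient of the squared error
        z -= lr * grad
    return np.linalg.norm(G(z) - x)

# A point on the manifold inverts to (near-)zero distance...
on_manifold = G(np.array([1.0, -2.0]))
print(distance_to_manifold(on_manifold))

# ...while a generic off-manifold point leaves a residual: the
# component of the perturbation orthogonal to the manifold.
off_manifold = on_manifold + rng.standard_normal(5)
print(distance_to_manifold(off_manifold))
```

The paper's observation is, in these terms, that real GANs behave like the second case even on their own training images, and can report *smaller* residuals for some out-of-distribution inputs.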