Noise Dimension of GAN: An Image Compression Perspective
arXiv (2024)
Abstract
A generative adversarial network (GAN) is a type of generative model that maps
high-dimensional noise to samples in a target distribution. However, the noise
dimension required by a GAN is not well understood. Previous approaches view a
GAN as a mapping from one continuous distribution to another. In this paper, we
propose to view a GAN as a discrete sampler instead. From this perspective, we
build a connection between the minimum noise dimension required and the number
of bits needed to losslessly compress the images. Furthermore, to understand
the behaviour of a GAN when the noise dimension is limited, we propose the
divergence-entropy trade-off, which characterizes the best divergence
achievable under a limited noise budget. Like the rate-distortion trade-off, it
can be solved numerically when the source distribution is known. Finally, we
verify our theory with experiments on image generation.
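To make the abstract's two claims concrete on a toy discrete source, here is a small sketch (our own illustration under simplifying assumptions, not code from the paper): the Shannon entropy of the source lower-bounds the noise bits a discrete sampler needs for an exact match, and when only k noise bits are available, a deterministic generator can only induce distributions whose probabilities are multiples of 2^-k, so the best achievable KL divergence can be found by brute force.

```python
import itertools
import math

def entropy_bits(probs):
    """Shannon entropy in bits: a lower bound on the uniform noise
    bits a discrete sampler needs to reproduce probs exactly."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_divergence(target, k):
    """Toy divergence-entropy trade-off (our own brute-force
    illustration): a deterministic generator fed k uniform noise bits
    induces a distribution with probabilities c_i / 2^k; return the
    smallest KL(target || induced) over all such distributions."""
    n = 2 ** k
    best = float("inf")
    # Enumerate all count vectors c with sum(c) == n.
    for counts in itertools.product(range(n + 1), repeat=len(target)):
        if sum(counts) != n:
            continue
        q = [c / n for c in counts]
        try:
            kl = sum(p * math.log2(p / qi)
                     for p, qi in zip(target, q) if p > 0)
        except ZeroDivisionError:
            continue  # KL is infinite when q_i = 0 but p_i > 0
        best = min(best, kl)
    return best

# Hypothetical 4-outcome "image" distribution with dyadic probabilities.
target = [0.5, 0.25, 0.125, 0.125]
print(entropy_bits(target))        # 1.75 bits of entropy
print(best_divergence(target, 3))  # 3 bits suffice: divergence 0.0
print(best_divergence(target, 2))  # 2 bits fall short: divergence 0.25
```

With 3 noise bits the generator can hit the target exactly (its entropy is 1.75 < 3 and all probabilities are multiples of 1/8), while with 2 bits the best induced distribution is uniform and incurs a strictly positive divergence, mirroring the trade-off the abstract describes.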