Return of Unconditional Generation: A Self-supervised Representation Generation Method
arXiv (2023)
Abstract
Unconditional generation – the problem of modeling data distribution without
relying on human-annotated labels – is a long-standing and fundamental
challenge in generative models, creating a potential of learning from
large-scale unlabeled data. In the literature, the generation quality of an
unconditional method has been much worse than that of its conditional
counterpart. This gap can be attributed to the lack of semantic information
provided by labels. In this work, we show that one can close this gap by
generating semantic representations in the representation space produced by a
self-supervised encoder. These representations can be used to condition the
image generator. This framework, called Representation-Conditioned Generation
(RCG), provides an effective solution to the unconditional generation problem
without using labels. Through comprehensive experiments, we observe that RCG
significantly improves unconditional generation quality: e.g., it achieves a
new state-of-the-art FID of 2.15 on ImageNet 256x256, largely reducing the
previous best of 5.91 by a relative 64%. These unconditional results are situated
in the same tier as the leading class-conditional ones. We hope these
encouraging observations will attract the community's attention to the
fundamental problem of unconditional generation. Code is available at
https://github.com/LTH14/rcg.
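The two-stage recipe the abstract describes (first generate a semantic representation in a self-supervised encoder's space, then condition the image generator on it) can be sketched in a few lines. The sketch below is illustrative only: the toy representation generator and pixel generator are stand-ins for RCG's actual trained networks, and all function names, dimensions, and the mixing scheme are hypothetical, chosen just to show the unconditional-sampling flow without labels.

```python
import numpy as np

rng = np.random.default_rng(0)

REP_DIM = 256          # hypothetical dimensionality of the representation space
IMG_SHAPE = (3, 32, 32)  # toy image shape

def generate_representation(rng, rep_dim=REP_DIM):
    """Stand-in for RCG's representation generator. In the paper this is a
    generative model trained on self-supervised encoder outputs; here we
    simply sample a vector from a fixed distribution for illustration."""
    return rng.standard_normal(rep_dim)

def generate_image(representation, rng, img_shape=IMG_SHAPE):
    """Stand-in for the representation-conditioned pixel generator.
    A real implementation would be a conditional generative model; this toy
    version mixes a summary of the representation into noise only to make
    the conditioning path explicit."""
    noise = rng.standard_normal(img_shape)
    return noise + representation.mean()

def unconditional_sample(rng):
    """Unconditional generation in RCG's sense: no human labels anywhere.
    1) sample a semantic representation, 2) condition image generation on it."""
    rep = generate_representation(rng)
    img = generate_image(rep, rng)
    return rep, img

rep, img = unconditional_sample(rng)
print(rep.shape, img.shape)  # (256,) (3, 32, 32)
```

The point of the sketch is the interface: the "unconditional" sampler never sees a label, yet the pixel generator always runs conditionally, receiving a generated representation in place of a class label.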