Zero-Shot Text-to-Image Generation

INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139(2021)

Cited by 3542 | Viewed 43319
Abstract
Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.
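The core idea in the abstract is that text tokens and image tokens are concatenated into one sequence and modeled autoregressively, so every position is trained to predict the token that follows it. A minimal sketch of that single-stream setup, with assumed vocabulary sizes and toy token ids (not taken from the paper text above):

```python
# Sketch of the single-stream formulation: text and image tokens share one
# sequence, and training pairs are formed by shifting the stream by one.
# TEXT_VOCAB and IMAGE_VOCAB are illustrative assumptions, not quoted values.

TEXT_VOCAB = 16384
IMAGE_VOCAB = 8192

def build_stream(text_tokens, image_tokens):
    """Concatenate text and image tokens into a single stream.

    Image token ids are offset by the text vocabulary size so both
    modalities can share one embedding table without id collisions.
    """
    return list(text_tokens) + [TEXT_VOCAB + t for t in image_tokens]

def next_token_pairs(stream):
    """(input, target) pairs for autoregressive training: the model at
    each position predicts the immediately following token."""
    return list(zip(stream[:-1], stream[1:]))

stream = build_stream([5, 17, 42], [3, 999])   # 3 text tokens, 2 image tokens
pairs = next_token_pairs(stream)
```

A transformer trained on such pairs needs no auxiliary losses or side information: conditioning on text falls out of ordinary next-token prediction over the combined stream.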
Keywords
generation, zero-shot, text-to-image