Denoising Autoregressive Representation Learning
arXiv (2024)
Abstract
In this paper, we explore a new generative approach for learning visual
representations. Our method, DARL, employs a decoder-only Transformer to
predict image patches autoregressively. We find that training with Mean Squared
Error (MSE) alone leads to strong representations. To enhance the image
generation ability, we replace the MSE loss with the diffusion objective by
using a denoising patch decoder. We show that the learned representation can be
improved by using tailored noise schedules and longer training in larger
models. Notably, the optimal schedule differs significantly from the typical
ones used in standard image diffusion models. Overall, despite its simple
architecture, DARL delivers performance remarkably close to state-of-the-art
masked prediction models under the fine-tuning protocol. This marks an
important step towards a unified model capable of both visual perception and
generation, effectively combining the strengths of autoregressive and denoising
diffusion models.
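The MSE variant of the training objective described above can be sketched in a few lines: split an image into a raster-ordered sequence of patches, then score a model by how well it predicts each next patch from the patches before it. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation; `patchify`, `next_patch_mse`, and the mean-of-context predictor standing in for the Transformer are all hypothetical names introduced here for illustration.

```python
import numpy as np

def patchify(img, p):
    """Split an (H, W, C) image into a raster-ordered sequence of flattened p x p patches."""
    H, W, C = img.shape
    return (img.reshape(H // p, p, W // p, p, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, p * p * C))

def next_patch_mse(patches, predict):
    """Autoregressive next-patch MSE: predict patch t+1 from patches 0..t."""
    losses = []
    for t in range(len(patches) - 1):
        pred = predict(patches[: t + 1])          # causal context only
        losses.append(np.mean((pred - patches[t + 1]) ** 2))
    return float(np.mean(losses))

# Toy predictor: the mean of the context patches (a stand-in for the
# decoder-only Transformer, which is not reproduced here).
img = np.random.rand(8, 8, 3)
patches = patchify(img, 4)                        # 4 patches of 4*4*3 = 48 dims
loss = next_patch_mse(patches, lambda ctx: ctx.mean(axis=0))
```

The paper's diffusion variant would replace the clean target patch with a noised one and the MSE target with a denoising objective; only the loss head changes, not the causal patch ordering.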