You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation
CoRR (2024)
Abstract
In this paper, we introduce YONOS-SR, a novel stable diffusion-based approach
for image super-resolution that yields state-of-the-art results using only a
single DDIM step. We propose a novel scale distillation approach to train our
SR model. Instead of directly training our SR model on the scale factor of
interest, we start by training a teacher model on a smaller magnification
scale, thereby making the SR problem simpler for the teacher. We then train a
student model for a higher magnification scale, using the predictions of the
teacher as a target during the training. This process is repeated iteratively
until we reach the target scale factor of the final model. The rationale behind
our scale distillation is that the teacher aids the student diffusion model
training by i) providing a target adapted to the current noise level rather
than using the same target coming from ground truth data for all noise levels
and ii) providing an accurate target as the teacher has a simpler task to
solve. We empirically show that the distilled model significantly outperforms
a model trained directly at the high scale, especially when few inference
steps are used. Having a strong diffusion model that requires only one step
allows us to freeze the U-Net and fine-tune the decoder on top of it. We show
that the combination of the scale-distilled U-Net and the fine-tuned decoder
outperforms state-of-the-art methods that require 200 steps, using only a
single step.
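The iterative scheme described above can be sketched in toy form. The snippet below is a minimal illustration, not the paper's implementation: the diffusion U-Net is replaced by a per-pixel affine map, nearest-neighbour resampling stands in for bicubic conditioning, and the student's target is simply the teacher's prediction upsampled to the student's scale (a simplification of the paper's noise-level-matched target). All names (`ToySR`, `scale_distillation`, the `(2, 4)` schedule) are assumptions for illustration.

```python
import numpy as np

def downsample(x, factor):
    # Average-pool downsampling stand-in.
    return x.reshape(-1, factor).mean(axis=1)

def upsample(x, factor):
    # Nearest-neighbour upsampling stand-in for bicubic conditioning.
    return np.repeat(x, factor)

class ToySR:
    """Per-pixel affine map standing in for the diffusion U-Net:
    maps (noisy latent, upsampled LR condition) -> denoised prediction."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0
    def predict(self, noisy, cond):
        return self.w * noisy + self.b * cond
    def train(self, make_example, steps=300, lr=0.1):
        # Plain SGD on the MSE between prediction and target.
        for _ in range(steps):
            noisy, cond, target = make_example()
            err = self.predict(noisy, cond) - target
            self.w -= lr * np.mean(err * noisy)
            self.b -= lr * np.mean(err * cond)

def scale_distillation(lr_img, hr_img, schedule=(2, 4)):
    """Train a chain of models over increasing scale factors.
    The first (smallest-scale) model is trained on ground truth; each
    later student is trained against the previous teacher's predictions."""
    rng = np.random.default_rng(0)
    max_scale = schedule[-1]
    teacher, teacher_scale = None, None
    for scale in schedule:
        model = ToySR()
        gt = downsample(hr_img, max_scale // scale)  # ground truth at this scale

        def make_example(teacher=teacher, teacher_scale=teacher_scale,
                         gt=gt, scale=scale):
            cond = upsample(lr_img, scale)
            noisy = gt + rng.normal(0.0, 0.1, gt.shape)  # noised latent
            if teacher is None:
                target = gt  # smallest scale: train on real data
            else:
                # Student target comes from the teacher, which solves the
                # simpler, lower-magnification problem.
                t_cond = upsample(lr_img, teacher_scale)
                t_noisy = downsample(noisy, scale // teacher_scale)
                target = upsample(teacher.predict(t_noisy, t_cond),
                                  scale // teacher_scale)
            return noisy, cond, target

        model.train(make_example)
        teacher, teacher_scale = model, scale  # student becomes next teacher
    return teacher  # final model at the target scale factor
```

Running `scale_distillation` on a toy signal yields a single model for the final scale factor; in the paper this distilled model is then used with one DDIM step, and the decoder is fine-tuned on top of the frozen U-Net.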