Improving the Stability of Diffusion Models for Content Consistent Super-Resolution
CoRR (2023)
Abstract
The generative priors of pre-trained latent diffusion models have
demonstrated great potential to enhance the perceptual quality of image
super-resolution (SR) results. Unfortunately, the existing diffusion
prior-based SR methods encounter a common problem, i.e., they tend to generate
rather different outputs for the same low-resolution image with different noise
samples. Such stochasticity is desired for text-to-image generation tasks but
problematic for SR tasks, where the image contents are expected to be well
preserved. To improve the stability of diffusion prior-based SR, we propose to
employ diffusion models to refine image structures, while employing
generative adversarial training to enhance fine image details. Specifically, we
propose a non-uniform timestep learning strategy to train a compact diffusion
network, which can reproduce the main image structures with high efficiency and
stability, and fine-tune the pre-trained decoder of the variational auto-encoder
(VAE) via adversarial training for detail enhancement. Extensive experiments
show that our proposed method, namely content consistent super-resolution
(CCSR), can significantly reduce the stochasticity of diffusion prior-based SR,
improving the content consistency of SR outputs and speeding up the image
generation process. Codes and models can be found at
https://github.com/csslc/CCSR.
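The abstract mentions a non-uniform timestep strategy but gives no specifics. A minimal sketch of what such a schedule could look like, assuming a simple power-law spacing that concentrates sampling steps at low noise levels; the function name, the `gamma` parameter, and the spacing rule are illustrative assumptions, not CCSR's actual design:

```python
def nonuniform_timesteps(t_max=1000, n_steps=4, gamma=2.0):
    """Pick a small, non-uniformly spaced set of diffusion timesteps.

    Hypothetical sketch: a power-law spacing (gamma > 1) places more of
    the few available steps at low noise levels, where fine structure is
    decided. CCSR's real schedule may differ.
    """
    fracs = [(i / n_steps) ** gamma for i in range(1, n_steps + 1)]
    # Map fractions onto the [0, t_max] range and sample in descending
    # order, as in standard DDPM-style reverse sampling.
    return sorted({round(f * t_max) for f in fracs}, reverse=True)

print(nonuniform_timesteps())  # a compact, descending timestep set
```

Such a compact schedule is one way a diffusion network could be kept efficient while still recovering main image structures, with detail synthesis deferred to the adversarially fine-tuned VAE decoder.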