Accelerating Convergence of Score-Based Diffusion Models, Provably
arXiv (2024)
Abstract
Score-based diffusion models, while achieving remarkable empirical
performance, often suffer from low sampling speed due to the extensive
function evaluations needed during the sampling phase. Despite a flurry of
recent activity towards speeding up diffusion generative modeling in
practice, theoretical underpinnings for acceleration techniques remain
severely limited. In this paper, we design novel training-free algorithms to
accelerate popular deterministic (i.e., DDIM) and stochastic (i.e., DDPM)
samplers. Our accelerated deterministic sampler converges at a rate O(1/T^2),
where T is the number of steps, improving upon the O(1/T) rate of the DDIM
sampler; and our accelerated stochastic sampler converges at a rate O(1/T),
outperforming the O(1/√T) rate of the DDPM sampler. The design of our
algorithms leverages insights from higher-order approximation, and shares
similar intuitions with popular high-order ODE solvers such as DPM-Solver-2.
Our theory accommodates ℓ_2-accurate score estimates, and does not require
log-concavity or smoothness of the target distribution.
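To make the higher-order idea concrete, below is a minimal sketch (not the paper's exact algorithm) of a training-free, second-order deterministic sampler: it integrates the probability-flow ODE with a Heun predictor-corrector step, in the spirit of DPM-Solver-2, so that one extra score evaluation per step yields second-order local accuracy. The Gaussian target N(0, C²I), its closed-form score, and the VP noise schedule are assumptions made only so the example runs end to end without a trained score network.

```python
# Hedged sketch of a second-order deterministic diffusion sampler.
# Probability-flow ODE for the VP diffusion:  dx/dt = -0.5 * beta(t) * (x + score(x, t)).
# The analytic Gaussian score below stands in for a learned s_theta.
import numpy as np

C = 2.0                        # std of the (hypothetical) Gaussian target
BETA_MIN, BETA_MAX = 0.1, 20.0 # standard VP-SDE noise schedule


def beta(t):
    return BETA_MIN + t * (BETA_MAX - BETA_MIN)


def alpha_bar(t):
    # alpha_bar(t) = exp(-integral_0^t beta(s) ds) for the VP forward process
    return np.exp(-(BETA_MIN * t + 0.5 * (BETA_MAX - BETA_MIN) * t ** 2))


def score(x, t):
    # Exact score of p_t when the data distribution is N(0, C^2 I)
    var_t = alpha_bar(t) * C ** 2 + (1.0 - alpha_bar(t))
    return -x / var_t


def drift(x, t):
    # Probability-flow ODE drift
    return -0.5 * beta(t) * (x + score(x, t))


def sample_heun(n_steps=20, dim=2, t_max=1.0, t_min=1e-3, seed=None):
    """Integrate the probability-flow ODE backward in time with Heun's method:
    a first-order (DDIM-like) predictor followed by a trapezoidal corrector."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)             # start from the N(0, I) prior
    ts = np.linspace(t_max, t_min, n_steps + 1)
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        h = t_next - t_cur                    # negative step (time runs backward)
        d_cur = drift(x, t_cur)
        x_pred = x + h * d_cur                # first-order predictor
        d_next = drift(x_pred, t_next)        # second score evaluation per step
        x = x + 0.5 * h * (d_cur + d_next)    # second-order corrector
    return x


if __name__ == "__main__":
    samples = np.stack([sample_heun(seed=i) for i in range(2000)])
    print("empirical std per dim:", samples.std(axis=0))  # should be close to C
```

Dropping the corrector line recovers a plain first-order Euler (DDIM-style) sampler, which is the natural baseline for the O(1/T) versus O(1/T^2) comparison stated in the abstract; the paper's accelerated samplers and their guarantees are developed for learned, ℓ_2-accurate scores rather than this illustrative analytic one.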