Diffusion Posterior Sampling is Computationally Intractable

CoRR (2024)

Abstract
Diffusion models are a remarkably effective way of learning and sampling from a distribution p(x). In posterior sampling, one is also given a measurement model p(y | x) and a measurement y, and would like to sample from p(x | y). Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction, so a number of recent works have given algorithms to heuristically approximate it; but none are known to converge to the correct distribution in polynomial time. In this paper we show that posterior sampling is computationally intractable: under the most basic assumption in cryptography – that one-way functions exist – there are instances for which every algorithm takes superpolynomial time, even though unconditional sampling is provably fast. We also show that the exponential-time rejection sampling algorithm is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
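To make the baseline concrete, below is a minimal sketch (not taken from the paper) of the exponential-time rejection-sampling approach the abstract refers to: draw unconditional samples x ~ p(x), e.g. from a diffusion sampler, and accept each with probability proportional to the likelihood p(y | x). The names `sample_prior`, `log_likelihood`, and `log_likelihood_max` are hypothetical placeholders for whatever prior sampler and measurement model one has.

```python
import numpy as np


def rejection_sample_posterior(sample_prior, log_likelihood, log_likelihood_max,
                               rng, max_tries=100_000):
    """Draw one sample from p(x | y) ∝ p(y | x) p(x) by rejection sampling.

    sample_prior       -- callable returning x ~ p(x) (e.g. one run of an
                          unconditional diffusion sampler); hypothetical here
    log_likelihood     -- callable x -> log p(y | x) for the fixed measurement y
    log_likelihood_max -- an upper bound on log p(y | x) over all x
    """
    for _ in range(max_tries):
        x = sample_prior()
        # Accept x with probability p(y | x) / max_x' p(y | x').
        if np.log(rng.uniform()) < log_likelihood(x) - log_likelihood_max:
            return x
    raise RuntimeError("no sample accepted within max_tries")


# Example usage with a toy Gaussian prior and Gaussian measurement y = 1.0:
rng = np.random.default_rng(0)
x_post = rejection_sample_posterior(
    sample_prior=lambda: rng.normal(),             # x ~ N(0, 1)
    log_likelihood=lambda x: -0.5 * (1.0 - x) ** 2,  # log p(y=1 | x) up to a constant
    log_likelihood_max=0.0,
    rng=rng,
)
```

The expected number of prior draws is max_x p(y | x) / p(y), which can grow exponentially in the dimension; the paper's result is that, under cryptographic assumptions, no algorithm can do substantially better in the worst case.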