Label-Noise Robust Diffusion Models
ICLR 2024 (2024)
Abstract
Conditional diffusion models have shown remarkable performance in various
generative tasks, but training them requires large-scale datasets that often
contain noise in conditional inputs, a.k.a. noisy labels. This noise leads to
condition mismatch and quality degradation of generated data. This paper
proposes Transition-aware weighted Denoising Score Matching (TDSM) for training
conditional diffusion models with noisy labels, the first such study for
diffusion models. The TDSM objective contains a weighted sum of score
networks, incorporating instance-wise and time-dependent label transition
probabilities. We introduce a transition-aware weight estimator, which
leverages a time-dependent noisy-label classifier distinctively customized to
the diffusion process. Experiments across various datasets and noisy-label
settings show that TDSM improves the quality of generated samples and their
alignment with the given conditions. Furthermore, our method improves
generation performance even on prevalent benchmark datasets, which suggests
that these datasets contain latent noisy labels and that such noise poses a
risk to generative model training. Finally, we show that TDSM further improves
performance when applied on top of conventional noisy-label corrections,
empirically demonstrating its contribution as a component of label-noise
robust generative models. Our code is available at: https://github.com/byeonghu-na/tdsm.
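The central construction described above, replacing the single conditional score prediction with a weighted sum over candidate labels, weighted by instance-wise, time-dependent transition probabilities, can be sketched as a training step. The sketch below is illustrative only and is not the authors' implementation: the `score_net(x_t, t, y)` and `classifier(x_t, t)` interfaces, the known transition matrix `trans` with `trans[y, y_noisy] = p(y_noisy | y)`, the linear beta schedule, and the epsilon-parameterization of the score network are all assumptions.

```python
import torch
import torch.nn.functional as F

def tdsm_loss(score_net, classifier, trans, x0, y_noisy, num_classes, T=1000):
    """One TDSM-style training step (sketch; VP diffusion, eps-parameterization)."""
    b, device = x0.shape[0], x0.device
    t = torch.randint(0, T, (b,), device=device)

    # Linear beta schedule (assumption); alpha_bar_t = prod_s (1 - beta_s).
    betas = torch.linspace(1e-4, 2e-2, T, device=device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * eps

    # Instance-wise, time-dependent transition weights w(y | y_noisy, x_t, t):
    # a Bayes combination of the time-dependent noisy-label classifier's
    # posterior p(y | x_t, t) with the label transition matrix.
    with torch.no_grad():
        post = classifier(x_t, t).softmax(dim=-1)   # (b, C)
        w = post * trans[:, y_noisy].T              # (b, C)
        w = w / w.sum(dim=-1, keepdim=True)

    # Weighted sum of conditional noise predictions over all candidate labels.
    preds = torch.stack(
        [score_net(x_t, t, torch.full((b,), c, device=device, dtype=torch.long))
         for c in range(num_classes)],
        dim=1,
    )                                               # (b, C, ch, h, w)
    mixed = (w.view(b, num_classes, 1, 1, 1) * preds).sum(dim=1)

    # Standard denoising score matching target under eps-parameterization.
    return F.mse_loss(mixed, eps)
```

Note that this formulation requires one score-network forward pass per candidate label at each step; for large label spaces one would presumably restrict the sum to labels with non-negligible weight.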
Keywords
diffusion model, noisy label, robustness