Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
CVPR 2024
Abstract
Reinforcement learning from human feedback (RLHF) has shown significant
promise in fine-tuning diffusion models. Previous methods start by training a
reward model that aligns with human preferences, then leverage RL techniques to
fine-tune the underlying models. However, crafting an efficient reward model
demands extensive datasets, an optimal architecture, and manual hyperparameter
tuning, making the process both time- and cost-intensive. The direct preference
optimization (DPO) method, effective in fine-tuning large language models,
eliminates the necessity for a reward model. However, the extensive GPU memory
requirement of the diffusion model's denoising process hinders the direct
application of the DPO method. To address this issue, we introduce the Direct
Preference for Denoising Diffusion Policy Optimization (D3PO) method to
directly fine-tune diffusion models. The theoretical analysis demonstrates that
although D3PO omits training a reward model, it effectively functions as the
optimal reward model trained using human feedback data to guide the learning
process. This approach requires no training of a reward model, proving to be
more direct, cost-effective, and minimizing computational overhead. In
experiments, our method uses the relative scale of objectives as a proxy for
human preference, delivering comparable results to methods using ground-truth
rewards. Moreover, D3PO demonstrates the ability to reduce image distortion
rates and generate safer images, overcoming challenges in settings that lack
robust reward models.
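To make the core idea concrete, below is a minimal PyTorch sketch of a DPO-style preference loss applied at a single denoising step, in the spirit of treating the denoising chain as a sequence of decisions. This is an illustrative assumption-laden sketch, not the authors' implementation: the function name d3po_step_loss, the beta temperature, and the assumption that per-step log-probabilities under the fine-tuned policy and a frozen reference model are available are all hypothetical.

```python
# Hypothetical sketch of a D3PO-style per-step preference loss (not the
# paper's code). Assumes each denoising step exposes log-probabilities of
# the sampled latent under both the fine-tuned policy and a frozen
# reference model, so the standard DPO objective can be applied per step.
import torch
import torch.nn.functional as F

def d3po_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style loss for one denoising step.

    logp_w / logp_l: policy log-probs of the step taken on the preferred /
    rejected trajectory; ref_* are the same quantities under the frozen
    reference model. All tensors have shape (batch,).
    """
    # Log-ratio of policy to reference for each trajectory in the pair.
    adv_w = logp_w - ref_logp_w
    adv_l = logp_l - ref_logp_l
    # -logsigmoid of the scaled margin is the standard Bradley-Terry /
    # DPO objective: it pushes the policy toward the preferred step.
    return -F.logsigmoid(beta * (adv_w - adv_l)).mean()

# Toy usage with random numbers standing in for model log-probs.
logp_w = torch.randn(4, requires_grad=True)
loss = d3po_step_loss(logp_w, torch.randn(4), torch.randn(4), torch.randn(4))
loss.backward()
print(loss.item())
```

Because the loss only needs per-step log-probabilities for the pair being compared, gradients never have to be held across the entire denoising trajectory, which is consistent with the memory motivation described in the abstract.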