Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases
CoRR (2024)
Abstract
Bridging the gap between diffusion models and human preferences is crucial
for their integration into practical generative workflows. While optimizing
downstream reward models has emerged as a promising alignment strategy,
excessive optimization against these learned reward models risks
compromising ground-truth performance. In this work,
we confront the reward overoptimization problem in diffusion model alignment
through the lenses of both inductive and primacy biases. We first identify the
divergence of current methods from the temporal inductive bias inherent in the
multi-step denoising process of diffusion models as a potential source of
overoptimization. Surprisingly, we then discover that dormant neurons in our
critic model act as a regularization against overoptimization, while active
neurons reflect primacy bias in this setting. Motivated by these observations,
we propose Temporal Diffusion Policy Optimization with critic active neuron
Reset (TDPO-R), a policy gradient algorithm that exploits the temporal
inductive bias of intermediate timesteps, along with a novel reset strategy
that targets active neurons to counteract the primacy bias. Empirical results
demonstrate the superior efficacy of our algorithms in mitigating reward
overoptimization.
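
The first mechanism, exploiting the temporal inductive bias of the multi-step denoising process, amounts to giving every intermediate timestep its own reward signal instead of scoring only the final image. Below is a minimal sketch of such a per-timestep policy-gradient loss; the function name, the `critic` interface, and the plain REINFORCE form are illustrative assumptions, not the paper's exact objective.

```python
import torch

def temporal_policy_gradient_loss(log_probs, latents, critic):
    """Sketch of a per-timestep policy-gradient objective.

    log_probs: list of T tensors, log-prob of each denoising step.
    latents:   list of T tensors, the intermediate latents after each step.
    critic:    callable (latent, t) -> per-sample reward estimate.
    """
    total = 0.0
    for t, (lp, x) in enumerate(zip(log_probs, latents)):
        with torch.no_grad():
            r_t = critic(x, t)              # score the intermediate state
        total = total - (lp * r_t).mean()   # REINFORCE-style term per timestep
    return total / len(log_probs)
```

Because every timestep contributes its own term, the gradient signal matches the sequential structure of the sampler rather than treating generation as a single-step decision.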
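The second mechanism, resetting active critic neurons, inverts the usual dormant-neuron resets from the primacy-bias literature: here the most active neurons are reinitialized, while dormant ones, which act as a regularizer, are left alone. A minimal PyTorch sketch follows; the activity score (mean absolute post-ReLU activation over a batch) and the reset fraction `frac` are illustrative assumptions, not the paper's exact criterion.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def reset_active_neurons(layer: nn.Linear, next_layer: nn.Linear,
                         activations: torch.Tensor, frac: float = 0.1):
    """Reinitialize the incoming weights of the most-active neurons of
    `layer` and zero their outgoing weights in `next_layer`.

    activations: (batch, out_features) post-ReLU outputs of `layer`.
    """
    score = activations.abs().mean(dim=0)           # per-neuron activity
    k = max(1, int(frac * score.numel()))
    active = torch.topk(score, k).indices           # most-active neurons
    # Fresh incoming weights and biases for the selected neurons.
    new_rows = torch.empty(k, layer.in_features, device=layer.weight.device)
    nn.init.kaiming_uniform_(new_rows, a=5 ** 0.5)  # PyTorch's default Linear init
    layer.weight[active] = new_rows
    layer.bias[active] = 0.0
    # Zero outgoing weights so reset neurons start with no downstream influence.
    next_layer.weight[:, active] = 0.0

# Hypothetical two-layer critic; in TDPO-R the critic also conditions on
# the timestep, omitted here for brevity.
critic = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1))
feats = critic[1](critic[0](torch.randn(32, 64)))   # post-ReLU activations
reset_active_neurons(critic[0], critic[2], feats, frac=0.1)
```

Zeroing the outgoing weights keeps the reset from perturbing the critic's current outputs, a common design choice in neuron-reset schemes.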