COPR: Continual Human Preference Learning via Optimal Policy Regularization
CoRR (2024)
Abstract
Reinforcement Learning from Human Feedback (RLHF) is commonly utilized to
improve the alignment of Large Language Models (LLMs) with human preferences.
Given the evolving nature of human preferences, continual alignment becomes
more crucial and practical than traditional static alignment.
Nevertheless, making RLHF compatible with Continual Learning (CL) is
challenging due to its complex process. Meanwhile, directly learning new human
preferences may lead to Catastrophic Forgetting (CF) of historical preferences,
resulting in unhelpful or harmful outputs. To overcome these challenges, we
propose the Continual Optimal Policy Regularization (COPR) method, which draws
inspiration from optimal policy theory. COPR utilizes a sampling
distribution both as a demonstration and as regularization constraints for CL. It
adopts the Lagrangian Duality (LD) method to dynamically regularize the current
policy based on the historically optimal policy, which prevents CF and avoids
over-emphasizing unbalanced objectives. We also provide a formal proof of the
learnability of COPR. Experimental results show that COPR outperforms
strong CL baselines on our proposed benchmark in terms of reward-based evaluation,
GPT-4 evaluation, and human assessment. Furthermore, we validate the robustness of
COPR under various CL settings, including different backbones, replay memory
sizes, and learning orders.
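The regularization idea described above can be pictured with a minimal primal-dual sketch. The snippet below is an illustrative assumption based only on this abstract, not the authors' released implementation or exact objective: it penalizes the divergence of the current policy from a frozen historically optimal policy and updates a Lagrange multiplier by dual ascent so that divergence stays near a budget. All names (copr_style_step, epsilon, the batch keys, and the preferred-token loss) are hypothetical.

    import torch
    import torch.nn.functional as F

    def copr_style_step(policy, old_policy, batch, lam, epsilon=0.05,
                        lr_policy=1e-5, lr_lambda=1e-2):
        """One primal-dual step: descend the Lagrangian in the policy parameters,
        then ascend it in the multiplier lam so KL(old || new) stays near epsilon."""
        logits_new = policy(batch["input_ids"])               # current policy
        with torch.no_grad():
            logits_old = old_policy(batch["input_ids"])       # frozen historical policy

        log_p_new = F.log_softmax(logits_new, dim=-1)
        p_old = F.softmax(logits_old, dim=-1)

        # Divergence of the current policy from the historically optimal one.
        kl = (p_old * (torch.log(p_old + 1e-8) - log_p_new)).sum(-1).mean()

        # Hypothetical new-task objective: likelihood of the preferred continuation.
        task_loss = F.nll_loss(log_p_new.flatten(0, -2),
                               batch["preferred_ids"].flatten())

        # Lagrangian: new-task loss plus a dynamically weighted divergence penalty.
        lagrangian = task_loss + lam * (kl - epsilon)

        policy.zero_grad()
        lagrangian.backward()
        with torch.no_grad():
            for p in policy.parameters():
                if p.grad is not None:
                    p -= lr_policy * p.grad          # primal descent step

        # Dual ascent on lam, projected to remain non-negative: lam grows while the
        # divergence exceeds the budget and shrinks once the constraint is satisfied.
        lam = max(0.0, lam + lr_lambda * (kl.item() - epsilon))
        return lam

The dual variable lam plays the role of a dynamic regularization weight, which is the intuition behind using Lagrangian Duality rather than a fixed penalty coefficient.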