WARM: On the Benefits of Weight Averaged Reward Models
CoRR (2024)
Abstract
Aligning large language models (LLMs) with human preferences through
reinforcement learning from human feedback (RLHF) can lead to reward hacking, where LLMs exploit
failures in the reward model (RM) to achieve seemingly high rewards without
meeting the underlying objectives. We identify two primary challenges when
designing RMs to mitigate reward hacking: distribution shifts during the RL
process and inconsistencies in human preferences. As a solution, we propose
Weight Averaged Reward Models (WARM), first fine-tuning multiple RMs, then
averaging them in the weight space. This strategy follows the observation that
fine-tuned weights remain linearly mode connected when sharing the same
pre-training. By averaging weights, WARM improves efficiency compared to the
traditional ensembling of predictions, while improving reliability under
distribution shifts and robustness to preference inconsistencies. Our
experiments on summarization tasks, using best-of-N and RL methods, show that
WARM improves the overall quality and alignment of LLM predictions; for
example, a policy RL fine-tuned with WARM has a 79.4% win rate against a policy
RL fine-tuned with a single RM.
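
To make the weight-averaging step concrete, below is a minimal PyTorch sketch of merging several fine-tuned reward models into one WARM model by uniform element-wise averaging of their parameters. The function name `warm_average` and the toy MLP reward models are illustrative assumptions, not code from the paper; in practice the inputs would be full RM checkpoints fine-tuned from the same pre-trained LLM.

```python
import copy
import torch
import torch.nn as nn

def warm_average(reward_models):
    """Merge M fine-tuned reward models by uniform weight averaging (illustrative sketch).

    Per the paper, the models should share the same pre-trained initialization so
    that their fine-tuned weights remain linearly mode connected, which is what
    makes element-wise averaging of parameters meaningful.
    """
    state_dicts = [rm.state_dict() for rm in reward_models]
    averaged = {
        name: torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
        for name in state_dicts[0]
    }
    merged = copy.deepcopy(reward_models[0])  # container with the same architecture
    merged.load_state_dict(averaged)
    return merged

# Toy usage: small MLPs stand in for real RM checkpoints (which, unlike these
# randomly initialized nets, would share the same pre-training).
torch.manual_seed(0)
rms = [nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)) for _ in range(3)]
warm_rm = warm_average(rms)
print(warm_rm(torch.randn(2, 8)))  # one forward pass instead of an M-model ensemble
```

The merged model then serves as the single reward scorer, whether for best-of-N reranking or as the reward signal during RL, at the inference cost of one model rather than an ensemble of M.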