ODIN: Disentangled Reward Mitigates Hacking in RLHF
CoRR (2024)
Abstract
In this work, we study the issue of reward hacking on the response length, a
challenge emerging in Reinforcement Learning from Human Feedback (RLHF) on
LLMs. A well-formatted, verbose but less helpful response from an LLM can
often deceive LLM or even human evaluators into assigning high scores. The
same issue also holds for some reward models in RL. To address the challenges in
both training and evaluation, we establish a more reliable evaluation protocol
for comparing different training configurations, which inspects the trade-off
between LLM evaluation score and response length obtained by varying training
hyperparameters. Based on this evaluation, we conduct large-scale studies
whose results shed light on the efficacy of hyperparameters and tricks used in
RL for mitigating length bias. We further propose to improve the reward
model by jointly training two linear heads on shared feature representations to
predict the rewards, one trained to correlate with length, and the other
trained to decorrelate with length and therefore focus more on the actual
content. We then discard the length head in RL to prevent reward hacking on
length. Experiments demonstrate that our approach almost eliminates the reward
correlation with length, and improves the obtained policy by a significant
margin.
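
For concreteness, here is a minimal Python sketch of the kind of score-vs-length trade-off comparison the evaluation protocol describes: each training configuration yields a point (mean response length, LLM evaluation score), and configurations are compared by their Pareto frontier rather than by score alone. The data layout and the example numbers are hypothetical, not taken from the paper.

```python
from typing import List, Tuple

def pareto_frontier(runs: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the runs not dominated by any shorter run with an equal or higher score.

    Each run is (mean_response_length, eval_score); shorter responses and
    higher scores are both preferred.
    """
    frontier = []
    best_score = float("-inf")
    for length, score in sorted(runs):   # ascending by response length
        if score > best_score:           # strictly beats every shorter run
            frontier.append((length, score))
            best_score = score
    return frontier

# Hypothetical runs trained with different RLHF hyperparameters:
runs = [(180.0, 7.1), (240.0, 7.4), (320.0, 7.4), (410.0, 7.6)]
print(pareto_frontier(runs))  # [(180.0, 7.1), (240.0, 7.4), (410.0, 7.6)]
```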
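Below is a minimal PyTorch sketch of the disentangled two-head reward model described in the abstract: two linear heads on a shared feature representation, one trained to correlate its output with response length and the other trained to decorrelate from it. The backbone interface, loss weights, and the particular correlation and orthogonality penalties are illustrative assumptions about the general recipe, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pearson_corr(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Pearson correlation between two 1-D tensors over the batch."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc.norm() * yc.norm() + 1e-8)

class DisentangledRewardModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_dim: int):
        super().__init__()
        self.backbone = backbone                      # shared feature extractor (e.g., an LM)
        self.length_head = nn.Linear(hidden_dim, 1)   # absorbs the length bias
        self.quality_head = nn.Linear(hidden_dim, 1)  # focuses on actual content

    def forward(self, inputs):
        feats = self.backbone(inputs)                 # assumed shape: (batch, hidden_dim)
        r_len = self.length_head(feats).squeeze(-1)   # length-correlated reward
        r_q = self.quality_head(feats).squeeze(-1)    # length-decorrelated reward
        return r_len, r_q

def disentangled_loss(model, chosen, rejected, chosen_len, rejected_len,
                      lam_corr=1.0, lam_orth=1.0):
    # Standard pairwise ranking loss on the SUM of both heads.
    cl, cq = model(chosen)
    rl, rq = model(rejected)
    rank_loss = -F.logsigmoid((cl + cq) - (rl + rq)).mean()

    # Length head: maximize correlation with response length.
    # Quality head: drive its correlation with length toward zero.
    rewards_len = torch.cat([cl, rl])
    rewards_q = torch.cat([cq, rq])
    lengths = torch.cat([chosen_len, rejected_len]).float()
    corr_loss = (-pearson_corr(rewards_len, lengths)
                 + pearson_corr(rewards_q, lengths).abs())

    # Push the two heads toward orthogonal directions in feature space.
    orth_loss = (F.normalize(model.length_head.weight, dim=-1) *
                 F.normalize(model.quality_head.weight, dim=-1)).sum().abs()

    return rank_loss + lam_corr * corr_loss + lam_orth * orth_loss
```

At RL time, the length head would be discarded and only the quality head's output used as the reward, which is what prevents the policy from hacking the reward through verbosity.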