Preventing Reward Hacking with Occupancy Measure Regularization
arXiv (2024)
Abstract
Reward hacking occurs when an agent performs very well with respect to a
"proxy" reward function (which may be hand-specified or learned), but poorly
with respect to the unknown true reward. Since ensuring good alignment between
the proxy and true reward is extremely difficult, one approach to prevent
reward hacking is optimizing the proxy conservatively. Prior work has
particularly focused on constraining the learned policy to behave similarly to a
"safe" policy by penalizing the KL divergence between their action
distributions (AD). However, AD regularization does not always work well, since a
small change in action distribution at a single state can lead to potentially
calamitous outcomes, while large changes might not be indicative of any
dangerous activity. Our insight is that when an agent reward hacks, it visits
drastically different states from those reached by the safe policy, causing
large deviations in state occupancy measure (OM). Thus, we propose regularizing
based on the OM divergence between policies instead of AD divergence to prevent
reward hacking. We theoretically establish that OM regularization can more
effectively avoid large drops in true reward. Then, we empirically demonstrate
in a variety of realistic environments that OM divergence is superior to AD
divergence for preventing reward hacking by regularizing towards a safe policy.
Furthermore, we show that occupancy measure divergence can also regularize
learned policies away from reward hacking behavior. Our code and data are
available at https://github.com/cassidylaidlaw/orpo
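
To make the contrast between the two regularizers concrete, below is a minimal tabular sketch. It is not the authors' ORPO implementation (which targets realistic deep-RL environments); the function names, the exact occupancy-measure computation, and the choice of total-variation distance are illustrative assumptions only.

```python
import numpy as np

def action_dist_kl(pi, pi_safe, d_pi):
    """AD regularizer: expected KL(pi(.|s) || pi_safe(.|s)) over states visited by pi.
    pi, pi_safe: (S, A) arrays of action probabilities (assumed strictly positive);
    d_pi: (S,) state occupancy measure of pi used as the weighting."""
    per_state_kl = np.sum(pi * (np.log(pi) - np.log(pi_safe)), axis=1)
    return float(d_pi @ per_state_kl)

def occupancy_measure(pi, P, mu0, gamma):
    """Discounted state occupancy d_pi(s) = (1 - gamma) * sum_t gamma^t Pr(s_t = s)
    for a tabular MDP.  P: (S, A, S) transition tensor, mu0: (S,) initial distribution."""
    num_states = len(mu0)
    # State-to-state kernel under pi: P_pi[s, s'] = sum_a pi(a|s) P(s'|s, a)
    P_pi = np.einsum('sa,sat->st', pi, P)
    # Solve (I - gamma * P_pi)^T d = (1 - gamma) * mu0 for the occupancy vector d
    return np.linalg.solve(np.eye(num_states) - gamma * P_pi.T, (1 - gamma) * mu0)

def occupancy_divergence(d_pi, d_safe):
    """OM regularizer: total-variation distance between the two occupancy measures
    (one of several possible divergences; shown here for simplicity)."""
    return 0.5 * float(np.abs(d_pi - d_safe).sum())
```

In this sketch, the conservatively regularized objective would be the expected proxy reward under pi minus a coefficient times `occupancy_divergence(d_pi, d_safe)`, rather than minus the AD penalty `action_dist_kl(...)`.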