A density estimation perspective on learning from pairwise human preferences
CoRR (2023)
Abstract
Learning from human feedback (LHF) – and in particular learning from
pairwise preferences – has recently become a crucial ingredient in training
large language models (LLMs), and has been the subject of much research. Most
recent works frame it as a reinforcement learning problem, where a reward
function is learned from pairwise preference data and the LLM is treated as a
policy which is adapted to maximize the rewards, often under additional
regularization constraints. We propose an alternative interpretation which
centers on the generative process for pairwise preferences and treats LHF as a
density estimation problem. We provide theoretical and empirical results
showing that for a family of generative processes defined via preference
behavior distribution equations, training a reward function on pairwise
preferences effectively models an annotator's implicit preference distribution.
Finally, we discuss and present findings on "annotator misspecification" –
failure cases where wrong modeling assumptions are made about annotator
behavior, resulting in poorly-adapted models – suggesting that approaches that
learn from pairwise human preferences could have trouble learning from a
population of annotators with diverse viewpoints.
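The abstract does not spell out the reward-learning objective it refers to, but the standard setup in this literature models the probability that an annotator prefers completion y1 over y2 as sigmoid(r(x, y1) - r(x, y2)), the Bradley-Terry model, and fits the reward function r by maximum likelihood on the pairwise labels. Below is a minimal sketch of that per-pair negative log-likelihood; the function name and toy reward values are illustrative, and a real pipeline would backpropagate this loss through a learned reward model rather than score fixed numbers.

```python
import numpy as np

def bradley_terry_nll(r_preferred, r_rejected):
    """Mean negative log-likelihood of observed pairwise preferences
    under the Bradley-Terry model: P(a preferred over b) = sigmoid(r(a) - r(b))."""
    margin = np.asarray(r_preferred) - np.asarray(r_rejected)
    # -log(sigmoid(margin)) computed stably as log(1 + exp(-margin))
    return np.mean(np.logaddexp(0.0, -margin))

# Hypothetical reward-model scores for three annotated pairs: the
# preferred completion outscores the rejected one in two of three.
r_preferred = [2.1, 0.3, 1.5]
r_rejected = [1.0, 0.8, -0.2]
print(bradley_terry_nll(r_preferred, r_rejected))
```

Under the density-estimation reading the paper proposes, minimizing this loss is not just fitting a reward: it estimates the annotator's implicit preference distribution, which is why misspecifying the assumed annotator behavior (e.g., a single Bradley-Terry annotator standing in for a diverse population) can yield poorly adapted models.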