Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data
arXiv (2024)
Abstract
Learning from preference labels plays a crucial role in fine-tuning large
language models. There are several distinct approaches for preference
fine-tuning, including supervised learning, on-policy reinforcement learning
(RL), and contrastive learning. Each method comes with different implementation
tradeoffs and performance characteristics, and existing empirical findings
present conflicting conclusions: for instance, some results show that online RL
is quite important for attaining good fine-tuning results, while others find
(offline) contrastive or even purely supervised methods sufficient. This
raises a natural question: what kind of approaches are important for
fine-tuning with preference data and why? In this paper, we answer this
question by performing a rigorous analysis of a number of fine-tuning
techniques on didactic and full-scale LLM problems. Our main finding is that,
in general, approaches that use on-policy sampling or attempt to push down the
likelihood of certain responses (i.e., that employ a "negative gradient") outperform
offline and maximum likelihood objectives. We conceptualize our insights and
unify methods that use on-policy sampling or negative gradient under a notion
of mode-seeking objectives for categorical distributions. Mode-seeking
objectives are able to alter probability mass on specific bins of a categorical
distribution at a faster rate than maximum likelihood, allowing them to
relocate probability mass across bins more effectively. Our analysis prescribes
actionable insights for preference fine-tuning of LLMs and informs how data
should be collected for maximal improvement.
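To make the mode-seeking vs. maximum-likelihood distinction concrete, here is a small illustrative sketch, not taken from the paper: the target distribution, learning rate, and step counts are arbitrary choices. It compares how the forward-KL (maximum-likelihood, mass-covering) gradient and the reverse-KL (mode-seeking) gradient move probability mass across the bins of a toy categorical distribution.

```python
# Toy sketch (assumptions: target distribution p, learning rate, and step
# counts are illustrative, not from the paper). Forward KL corresponds to
# maximum likelihood / mass-covering; reverse KL is mode-seeking and applies
# a "negative gradient" to bins where the model overshoots the target.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

p = np.array([0.80, 0.15, 0.04, 0.01])  # target categorical distribution

def forward_kl_grad(logits):
    # Gradient of KL(p || q) w.r.t. the logits: q - p.
    return softmax(logits) - p

def reverse_kl_grad(logits):
    # Gradient of KL(q || p) w.r.t. the logits: q * (f - E_q[f]),
    # where f_i = log q_i - log p_i.
    q = softmax(logits)
    f = np.log(q) - np.log(p)
    return q * (f - np.dot(q, f))

def run(grad_fn, steps, lr=0.5):
    logits = np.zeros_like(p)  # start from the uniform distribution
    for _ in range(steps):
        logits = logits - lr * grad_fn(logits)
    return softmax(logits)

for steps in (5, 20):
    print(f"after {steps:2d} steps",
          "forward KL:", np.round(run(forward_kl_grad, steps), 3),
          "reverse KL:", np.round(run(reverse_kl_grad, steps), 3))
```

In this toy setting both objectives share the same minimizer, but in the early steps the reverse-KL gradient drains mass from the low-probability bins more aggressively than the forward-KL gradient, loosely mirroring the faster relocation of probability mass that the abstract attributes to mode-seeking objectives.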