Optimizing Language Models for Human Preferences is a Causal Inference Problem
CoRR (2024)
Abstract
As large language models (LLMs) see greater use in academic and commercial
settings, there is increasing interest in methods that allow language models to
generate texts aligned with human preferences. In this paper, we present an
initial exploration of language model optimization for human preferences from
direct outcome datasets, where each sample consists of a text and an associated
numerical outcome measuring the reader's response. We first propose that
language model optimization should be viewed as a causal problem to ensure that
the model correctly learns the relationship between the text and the outcome.
We formalize this causal language optimization problem, and we develop a
method, causal preference optimization (CPO), that solves an unbiased surrogate
objective for the problem. We further extend CPO with doubly robust CPO
(DR-CPO), which reduces the variance of the surrogate objective while retaining
provably strong guarantees on bias. Finally, we empirically demonstrate the
effectiveness of (DR-)CPO in optimizing state-of-the-art LLMs for human
preferences on direct outcome data, and we validate the robustness of DR-CPO
under difficult confounding conditions.
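For orientation, "doubly robust" in the abstract refers to a standard estimator family from the causal inference literature, which combines an outcome (regression) model with importance weighting so that the estimate stays unbiased if either component is correct. The sketch below is a minimal illustration of the generic doubly robust off-policy value estimate for text/outcome data, not the paper's DR-CPO objective; all function names (`pi_theta_logprob`, `logging_logprob`, `outcome_model`) are hypothetical placeholders.

```python
import numpy as np

def dr_value_estimate(logged_texts, logged_outcomes, pi_theta_logprob,
                      logging_logprob, outcome_model, pi_theta_samples):
    """Generic doubly robust estimate of the expected outcome under pi_theta.

    logged_texts, logged_outcomes: observed (text, outcome) pairs collected
        under a logging distribution p0.
    pi_theta_logprob(x): log pi_theta(x) under the model being optimized.
    logging_logprob(x): log p0(x) under the logging distribution.
    outcome_model(x): regression estimate mu_hat(x) of E[Y | X = x].
    pi_theta_samples: texts sampled from pi_theta for the direct-method term.
    """
    # Direct-method term: outcome model evaluated on samples from pi_theta.
    direct = np.mean([outcome_model(x) for x in pi_theta_samples])

    # Correction term: importance-weighted residuals on the logged data.
    # In expectation this corrects any bias in the outcome model, and it
    # vanishes when the outcome model is exact. This is the "doubly robust"
    # property: the estimate is unbiased if either the outcome model or the
    # importance weights are correct.
    residuals = [
        np.exp(pi_theta_logprob(x) - logging_logprob(x)) * (y - outcome_model(x))
        for x, y in zip(logged_texts, logged_outcomes)
    ]
    return float(direct + np.mean(residuals))
```

When the outcome model approximates the true conditional mean well, the residual term is small, which is the usual mechanism by which doubly robust estimators reduce variance relative to pure importance weighting, consistent with the variance-reduction claim the abstract makes for DR-CPO.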