Panacea: Pareto Alignment via Preference Adaptation for LLMs
CoRR (2024)
Abstract
Current methods for large language model alignment typically use scalar human
preference labels. However, this convention tends to oversimplify the
multi-dimensional and heterogeneous nature of human preferences, leading to
reduced expressivity and even misalignment. This paper presents Panacea, an
innovative approach that reframes alignment as a multi-dimensional preference
optimization problem. Panacea trains a single model capable of adapting online
and Pareto-optimally to diverse sets of preferences without the need for
further tuning. A major challenge here is using a low-dimensional preference
vector to guide the model's behavior, despite it being governed by an
overwhelmingly large number of parameters. To address this, Panacea is designed
to use singular value decomposition (SVD)-based low-rank adaptation, which
allows the preference vector to be simply injected online as singular values.
Theoretically, we prove that Panacea recovers the entire Pareto front with
common loss aggregation methods under mild conditions. Moreover, our
experiments demonstrate, for the first time, the feasibility of aligning a
single LLM to represent a spectrum of human preferences through various
optimization methods. Our work marks a step forward in effectively and
efficiently aligning models to diverse and intricate human preferences in a
controllable and Pareto-optimal manner.
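The abstract's core mechanism is injecting a low-dimensional preference vector into the model as singular values of an SVD-structured low-rank adaptation. Below is a minimal NumPy sketch of that idea, not the paper's actual implementation: the factor matrices `U`, `V`, the learned singular values `sigma_learned`, the concatenation layout, and the `scale` parameter are all illustrative assumptions.

```python
import numpy as np

def panacea_delta(U, V, sigma_learned, pref, scale=1.0):
    """Compose a low-rank weight update whose last singular values are the
    preference vector (hypothetical layout; illustrates the idea only).

    U: (d_out, r) left factor;  V: (r, d_in) right factor
    sigma_learned: (r - m,) singular values learned during training
    pref: (m,) preference weights, one per alignment dimension
    """
    # Inject the preference vector online as the trailing singular values.
    s = np.concatenate([sigma_learned, scale * pref])
    # Equivalent to U @ diag(s) @ V, via column-wise broadcasting.
    return (U * s) @ V

# Usage: the same trained factors adapt online to different preferences.
rng = np.random.default_rng(0)
d_out, d_in, r, m = 8, 8, 4, 2          # m alignment dimensions (e.g. helpful/harmless)
U = rng.standard_normal((d_out, r))
V = rng.standard_normal((r, d_in))
sigma = rng.standard_normal(r - m)
W0 = rng.standard_normal((d_out, d_in))  # frozen base weight

W_helpful = W0 + panacea_delta(U, V, sigma, np.array([0.9, 0.1]))
W_harmless = W0 + panacea_delta(U, V, sigma, np.array([0.1, 0.9]))
```

Because only the trailing entries of `s` change between calls, switching preferences at inference time requires no retraining, which is the property the abstract highlights.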