Black-Box Prompt Optimization: Aligning Large Language Models without Model Training
arXiv (2023)
Abstract
Large language models (LLMs) have shown impressive success in various
applications. However, these models are often not well aligned with human
intents, which calls for additional treatments on them; that is, the alignment
problem. To make LLMs better follow user instructions, existing alignment
methods primarily focus on further training them. However, the extra training
of LLMs is usually expensive in terms of GPU computing; even worse, some LLMs
are not accessible for user-demanded training, such as GPTs. In this work, we
take a different perspective – Black-Box Prompt Optimization (BPO) – to
perform alignments. The idea is to optimize user prompts to suit LLMs' input
understanding, so as to best realize users' intents without updating LLMs'
parameters. BPO leverages human preferences to optimize prompts, thus making it
superior to an LLM (e.g., ChatGPT) as a prompt engineer. Moreover, BPO is
model-agnostic, and the empirical results demonstrate that the BPO-aligned
ChatGPT yields a 22% increase in the win rate against its original version, and
10% for GPT-4. Notably, the BPO-aligned LLMs can outperform the same models
aligned by PPO and DPO, and it also brings additional performance gains when
combining BPO with PPO or DPO. Code and datasets are released at
https://github.com/thu-coai/BPO.
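The pipeline the abstract describes, rewriting the user's prompt before it reaches a frozen LLM, can be sketched as follows. This is a minimal illustration only: `candidate_rewrites` and `preference_score` are toy heuristic stand-ins invented here, whereas in BPO the rewriter is a small model trained on human preference data and the black-box LLM is queried via its API.

```python
# Minimal sketch of black-box prompt optimization: a small "prompt
# optimizer" rewrites the user's prompt before it is sent to a frozen
# LLM, so alignment happens without updating the LLM's parameters.
# The rewriter and scorer below are toy stand-ins, not BPO's models.

def candidate_rewrites(prompt: str) -> list[str]:
    """Generate candidate rewrites of a user prompt.

    Toy heuristic; in BPO this role is played by a learned
    prompt-optimization model."""
    return [
        prompt,
        prompt + " Answer step by step.",
        prompt + " Be concise and state your assumptions.",
    ]

def preference_score(prompt: str) -> float:
    """Toy stand-in for a preference model: reward prompts that make
    the desired behavior explicit."""
    score = 0.0
    if "step by step" in prompt:
        score += 1.0
    if "concise" in prompt:
        score += 0.5
    return score

def optimize_prompt(prompt: str) -> str:
    """Pick the candidate rewrite with the highest preference score;
    the downstream black-box LLM is never touched."""
    return max(candidate_rewrites(prompt), key=preference_score)

optimized = optimize_prompt("Explain how quicksort works.")
print(optimized)  # the rewrite that the toy scorer prefers
```

The optimized prompt, rather than the raw one, would then be passed to any black-box model (ChatGPT, GPT-4), which is why the method is model-agnostic and composes with training-based alignment such as PPO or DPO.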