Robust Prompt Optimization for Large Language Models Against Distribution Shifts
CoRR (2023)
Abstract
Large Language Models (LLMs) have demonstrated significant ability on various
Natural Language Processing tasks. However, their effectiveness is highly
dependent on the phrasing of the task prompt, leading to research on automatic
prompt optimization using labeled task data. We reveal that these prompt
optimization techniques are vulnerable to distribution shifts such as
subpopulation shifts, which are common for LLMs in real-world scenarios such as
customer review analysis. In this light, we propose a new problem of robust
prompt optimization for LLMs against distribution shifts, which requires that a
prompt optimized over a labeled source group simultaneously generalize to an
unlabeled target group. To solve this problem, we propose the Generalized Prompt
Optimization framework, which incorporates unlabeled data from the target
group into prompt optimization. Extensive experimental results demonstrate the
effectiveness of the proposed framework, with significant performance
improvement on the target group and comparable performance on the source group.
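The abstract does not spell out the framework's mechanics, so below is a minimal sketch of one plausible reading: score each candidate prompt on the labeled source group and on a pseudo-labeled copy of the unlabeled target group, then select the prompt that balances both. Everything here is an illustrative assumption, including the names select_robust_prompt and pseudo_label, the alpha weighting, and the llm callable standing in for any LLM client; the paper's actual Generalized Prompt Optimization procedure may differ.

```python
from typing import Callable, Sequence, Tuple

# Hypothetical stand-in for any LLM client: maps a full prompt string to model output.
LLMFn = Callable[[str], str]


def accuracy(prompt: str, examples: Sequence[Tuple[str, str]], llm: LLMFn) -> float:
    """Fraction of (text, label) examples the model answers correctly under `prompt`.

    Candidate prompts are assumed to be templates containing a "{text}" slot.
    """
    if not examples:
        return 0.0
    hits = sum(llm(prompt.format(text=text)).strip() == label
               for text, label in examples)
    return hits / len(examples)


def pseudo_label(texts: Sequence[str], prompt: str, llm: LLMFn) -> list:
    """Assumption: pseudo-label unlabeled target texts with the LLM's own predictions."""
    return [(text, llm(prompt.format(text=text)).strip()) for text in texts]


def select_robust_prompt(
    candidates: Sequence[str],
    source_labeled: Sequence[Tuple[str, str]],
    target_unlabeled: Sequence[str],
    llm: LLMFn,
    alpha: float = 0.5,
) -> str:
    """Return the candidate prompt with the best weighted source/target score.

    `alpha` trades off source accuracy against pseudo-labeled target accuracy;
    the paper's actual objective may differ from this illustrative choice.
    """
    # Pseudo-label the target group once, seeding with the first candidate prompt.
    target_pseudo = pseudo_label(target_unlabeled, candidates[0], llm)

    def score(prompt: str) -> float:
        return ((1.0 - alpha) * accuracy(prompt, source_labeled, llm)
                + alpha * accuracy(prompt, target_pseudo, llm))

    return max(candidates, key=score)
```

Using the model's own predictions as pseudo-labels is one common way to exploit unlabeled data; the weighted objective aims to keep source performance comparable while lifting target performance, mirroring the outcome the abstract reports.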
Keywords
large language models, distribution shifts, optimization