Efficient Prompting Methods for Large Language Models: A Survey
arXiv (2024)
Abstract
Prompting has become a mainstream paradigm for adapting large language models
(LLMs) to specific natural language processing tasks. While this approach opens
the door to in-context learning of LLMs, it brings the additional computational
burden of model inference and the human effort of manually designing prompts,
particularly when using lengthy and complex prompts to guide and control the
behavior of LLMs. As a result, the LLM field has seen a remarkable surge in
efficient prompting methods. In this paper, we present a comprehensive overview
of these methods. At a high level, efficient prompting methods can broadly be
categorized into two approaches: prompting with efficient computation and
prompting with efficient design. The former involves various ways of
compressing prompts, and the latter employs techniques for automatic prompt
optimization. We present the basic concepts of prompting, review the advances
for efficient prompting, and highlight future research directions.