A Survey on In-context Learning
arXiv (2022)
Abstract
With the increasing capabilities of large language models (LLMs), in-context
learning (ICL) has emerged as a new paradigm for natural language processing
(NLP), where LLMs make predictions based on contexts augmented with a few
examples. It has been a significant trend to explore ICL to evaluate and
extrapolate the ability of LLMs. In this paper, we aim to survey and summarize
the progress and challenges of ICL. We first present a formal definition of ICL
and clarify its correlation to related studies. Then, we organize and discuss
advanced techniques, including training strategies, prompt designing
strategies, and related analysis. Additionally, we explore various ICL
application scenarios, such as data engineering and knowledge updating.
Finally, we address the challenges of ICL and suggest potential directions for
further research. We hope that our work can encourage more research on
uncovering how ICL works and improving ICL.
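The core ICL setup described above, where an LLM predicts from a context augmented with a few examples, can be sketched as a simple prompt-construction step. The function and prompt template below are illustrative assumptions, not a method defined in the survey:

```python
# Minimal sketch of in-context learning (ICL) prompt construction:
# a few (input, output) demonstrations are concatenated ahead of the
# query, and the LLM is asked to complete the final "Output:" slot.
# The template and names here are hypothetical.

def build_icl_prompt(demonstrations, query):
    """Format demonstrations and a query into a single few-shot prompt."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]
prompt = build_icl_prompt(demos, "A truly delightful film.")
print(prompt)
```

The resulting string would then be sent to an LLM; no model parameters are updated, which is what distinguishes ICL from fine-tuning.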