Identifying and Analyzing Task-Encoding Tokens in Large Language Models
CoRR (2024)
Abstract
In-context learning (ICL) has become an effective solution for few-shot
learning in natural language processing. However, our understanding of ICL's
working mechanisms is limited, specifically regarding how models learn to
perform tasks from ICL demonstrations. For example, unexpectedly large changes
in performance can arise from small changes in the prompt, leaving prompt
design a largely empirical endeavour. In this paper, we investigate this
problem by identifying and analyzing task-encoding tokens on whose
representations the task performance depends. Using experiments that ablate the
representations of different token types, we find that template and stopword
tokens are the most likely to be task-encoding. In addition, we demonstrate
experimentally that lexical meaning, repetition, and text formatting are the
main distinguishing characteristics of these tokens. Our work sheds light on
how large language models (LLMs) learn to perform a task from demonstrations,
deepens our understanding of the varied roles different types of tokens play in
LLMs, and provides insights for avoiding instability from improperly utilizing
task-encoding tokens.
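The ablation experiment described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper's procedure: the token labels, hidden-state shapes, and the mean-substitution strategy are all assumptions made for demonstration. It marks template-like tokens in a small ICL-style prompt and overwrites their representations with the mean representation, removing token-specific information at those positions.

```python
import numpy as np

# Hypothetical ICL prompt tokens; the labeling scheme is illustrative.
tokens = ["Review:", "great", "movie", "Sentiment:", "positive", "\n",
          "Review:", "boring", "plot", "Sentiment:"]

# Treat template tokens ("Review:", "Sentiment:") and separators as the
# candidate task-encoding positions to ablate.
template_like = {"Review:", "Sentiment:", "\n"}
ablate_idx = [i for i, t in enumerate(tokens) if t in template_like]

rng = np.random.default_rng(0)
# Stand-in for one layer's hidden states: (num_tokens, hidden_dim).
hidden = rng.normal(size=(len(tokens), 8))

def ablate(states, positions):
    """Replace representations at `positions` with the mean representation,
    erasing position-specific content while keeping a plausible scale."""
    out = states.copy()
    out[positions] = states.mean(axis=0)
    return out

ablated = ablate(hidden, ablate_idx)
```

In a real LLM, the same idea would be applied inside the forward pass (e.g. via layer hooks), and the downstream drop in task accuracy would indicate how much the ablated positions encode about the task.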