Training Nonlinear Transformers for Efficient In-Context Learning: A Theoretical Learning and Generalization Analysis
CoRR (2024)
Abstract
Transformer-based large language models have displayed impressive in-context
learning (ICL) capabilities, where a pre-trained model can handle new tasks without
fine-tuning simply by augmenting the query with some input-output examples from
that task. Despite this empirical success, the mechanics of how to train a
Transformer to achieve ICL and the corresponding ICL capacity are mostly elusive
due to the technical challenges of analyzing the nonconvex training problems
resulting from the nonlinear self-attention and nonlinear activation in
Transformers. To the best of our knowledge, this paper provides the first
theoretical analysis of the training dynamics of Transformers with nonlinear
self-attention and nonlinear MLP, together with the ICL generalization
capability of the resulting model. Focusing on a group of binary classification
tasks, we train Transformers using data from a subset of these tasks and
quantify the impact of various factors on the ICL generalization performance on
the remaining unseen tasks with and without data distribution shifts. We also
analyze how different components in the learned Transformers contribute to the
ICL performance. Furthermore, we provide the first theoretical analysis of how
model pruning affects the ICL performance and prove that proper magnitude-based
pruning can have a minimal impact on ICL while reducing inference costs. These
theoretical findings are validated through numerical experiments.
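
To make the setting concrete, the paper studies a one-layer Transformer with nonlinear self-attention followed by a nonlinear MLP, applied to prompts that stack labeled examples from a binary classification task with an unlabeled query. The sketch below is a minimal illustration of that ICL setup, not the paper's exact parameterization: the token format, dimensions, softmax attention, ReLU MLP, and all variable names (`WQ`, `WK`, `WV`, `W1`, `W2`) are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def relu(z):
    return np.maximum(z, 0.0)

def icl_forward(params, examples_x, examples_y, query_x):
    """One-layer Transformer: nonlinear (softmax) self-attention + ReLU MLP.

    The prompt stacks (x_i, y_i) example pairs with the query (label slot
    zeroed out); the binary prediction is the sign of the logit at the
    query token.  Purely an illustrative sketch of the ICL setup.
    """
    # Build prompt tokens: each token is [x ; y]; the query's label is unknown (0).
    tokens = np.vstack([
        np.hstack([examples_x, examples_y[:, None]]),
        np.hstack([query_x, [0.0]]),
    ])                                                  # (n+1, d+1)

    WQ, WK, WV = params["WQ"], params["WK"], params["WV"]
    W1, W2 = params["W1"], params["W2"]

    # Nonlinear self-attention over the whole prompt.
    scores = softmax((tokens @ WQ) @ (tokens @ WK).T / np.sqrt(WQ.shape[1]))
    attn_out = scores @ (tokens @ WV)                   # (n+1, d_v)

    # Nonlinear MLP head producing one logit per token.
    logits = relu(attn_out @ W1) @ W2                   # (n+1,)

    # Read the binary prediction off the query position.
    return np.sign(logits[-1])

# Toy usage on a synthetic binary task (all dimensions are assumptions).
rng = np.random.default_rng(0)
d, n, d_v, d_h = 4, 8, 4, 16
params = {
    "WQ": rng.standard_normal((d + 1, d_v)) / np.sqrt(d + 1),
    "WK": rng.standard_normal((d + 1, d_v)) / np.sqrt(d + 1),
    "WV": rng.standard_normal((d + 1, d_v)) / np.sqrt(d + 1),
    "W1": rng.standard_normal((d_v, d_h)) / np.sqrt(d_v),
    "W2": rng.standard_normal(d_h),
}
xs = rng.standard_normal((n, d))
ys = np.sign(xs[:, 0])                     # toy labeling rule
pred = icl_forward(params, xs, ys, rng.standard_normal(d))
```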
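
The pruning result concerns magnitude-based pruning of the trained model. The paper's precise pruning criterion and guarantees are stated in the body of the paper; the following is only a generic sketch of magnitude pruning under the assumption that MLP hidden neurons are scored by the norm of their incoming weights and a user-chosen fraction of the smallest ones is removed.

```python
import numpy as np

def magnitude_prune_mlp(params, prune_frac=0.5):
    """Zero out the MLP hidden neurons with the smallest weight magnitudes.

    Neuron importance is measured here by the L2 norm of its incoming
    weights; both this choice and `prune_frac` are illustrative assumptions,
    not the paper's exact criterion.  Works on a params dict holding MLP
    weights "W1" (in_dim x hidden) and "W2" (hidden,).
    """
    W1, W2 = params["W1"], params["W2"]
    norms = np.linalg.norm(W1, axis=0)            # one score per hidden neuron
    k = int(prune_frac * norms.size)
    drop = np.argsort(norms)[:k]                  # smallest-magnitude neurons
    W1p, W2p = W1.copy(), W2.copy()
    W1p[:, drop] = 0.0                            # remove pruned neurons' inputs
    W2p[drop] = 0.0                               # and their output weights
    return {**params, "W1": W1p, "W2": W2p}
```

The claim analyzed in the paper is that, after training, predictions of the pruned model on in-context prompts remain close to those of the unpruned model when the pruning is done properly, so inference cost can be reduced with little loss in ICL accuracy.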