Backward Lens: Projecting Language Model Gradients into the Vocabulary Space
CoRR (2024)
Abstract
Understanding how Transformer-based Language Models (LMs) learn and recall
information is a key goal of the deep learning community. Recent
interpretability methods project weights and hidden states obtained from the
forward pass to the models' vocabularies, helping to uncover how information
flows within LMs. In this work, we extend this methodology to LMs' backward
pass and gradients. We first prove that a gradient matrix can be cast as a
low-rank linear combination of its forward and backward passes' inputs. We then
develop methods to project these gradients into vocabulary items and explore
the mechanics of how new information is stored in the LMs' neurons.
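A minimal sketch (not the authors' code) of the core observation in the abstract: for a linear layer, the weight gradient is the outer product of the backward-pass input (the gradient with respect to the layer's output) and the forward-pass input, and such a gradient can be read out in vocabulary space by multiplying with an unembedding matrix, logit-lens style. The toy dimensions, the random stand-in unembedding matrix, and the squared-error loss are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

d_in, d_out, vocab = 16, 8, 50                    # assumed toy dimensions
W = torch.randn(d_out, d_in, requires_grad=True)  # linear layer weight
x = torch.randn(d_in)                             # forward-pass input to the layer
unembed = torch.randn(vocab, d_out)               # stand-in for the LM's unembedding matrix

y = W @ x                        # forward pass through the layer
loss = y.pow(2).sum()            # arbitrary scalar loss for the example
loss.backward()

delta = 2 * y.detach()           # gradient of the loss w.r.t. y (the backward-pass input)
outer = torch.outer(delta, x)    # rank-1 reconstruction: delta x^T

# The autograd gradient of W equals the outer product of the backward- and forward-pass inputs.
print(torch.allclose(W.grad, outer))  # True

# Vocabulary-space readout of the gradient's output-side component via the unembedding matrix.
vocab_scores = unembed @ delta
print(vocab_scores.shape)             # torch.Size([50])
```

With a batch of tokens, the gradient becomes a sum of such rank-1 terms, one per position, which is the low-rank linear combination the abstract refers to.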