Information Flow Routes: Automatically Interpreting Language Models at Scale
arXiv (2024)
Abstract
Information flows by routes inside the network via mechanisms implemented in
the model. These routes can be represented as graphs where nodes correspond to
token representations and edges to operations inside the network. We
automatically build these graphs in a top-down manner, keeping for each
prediction only the most important nodes and edges. In contrast to the existing
workflows relying on activation patching, we do this through attribution: this
allows us to efficiently uncover existing circuits with just a single forward
pass. Additionally, the applicability of our method extends far beyond patching: we
do not need a human to carefully design prediction templates, and we can
extract information flow routes for any prediction (not just the ones among the
allowed templates). As a result, we can talk about model behavior in general,
for specific types of predictions, or across different domains. We experiment with
Llama 2 and show that some attention heads are important overall,
e.g. previous token heads and subword merging heads. Next, we find similarities
in Llama 2's behavior when handling tokens of the same part of speech. Finally,
we show that some model components can be specialized for domains such as coding
or multilingual texts.
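
Since the abstract only sketches the approach, below is a minimal toy illustration of the core idea: building a route graph top-down from a prediction, scoring candidate edges by attribution over quantities cached in a single forward pass, and keeping only edges above a threshold. Everything here is an illustrative assumption rather than the authors' implementation: the `edges_in` trace, the projection-based `contribution` score, and the threshold `tau` are stand-ins. In the actual method the update vectors would come from attention heads and feed-forward blocks of Llama 2, and the paper defines its own importance measure.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy residual-stream dimension

# Toy "cached single forward pass": for each node (layer, position) we store
# the update vector that each incoming edge writes into the residual stream.
# In the real method these would be attention-head and FFN outputs recorded
# from an actual LM; here they are random so the example runs stand-alone.
edges_in = {
    ("L2", 3): {("L1", 3): rng.normal(size=d), ("L1", 1): 0.05 * rng.normal(size=d)},
    ("L1", 3): {("L0", 3): rng.normal(size=d), ("L0", 2): 0.8 * rng.normal(size=d)},
    ("L1", 1): {("L0", 1): rng.normal(size=d)},
}

def contribution(update, total):
    """Normalized projection of one edge's update onto the node's summed
    state; a stand-in for the paper's attribution rule. By construction,
    the scores of all edges into a node sum to 1."""
    n2 = float(total @ total)
    return float(update @ total) / n2 if n2 > 0 else 0.0

def build_route(node, tau=0.1, graph=None):
    """Top-down pruning: start from the prediction node, keep only edges
    whose contribution exceeds tau, and recurse into surviving sources."""
    graph = {} if graph is None else graph
    if node in graph or node not in edges_in:  # already visited, or a leaf
        return graph
    updates = edges_in[node]
    total = sum(updates.values())  # the node's residual-stream state
    graph[node] = {src: c for src, u in updates.items()
                   if (c := contribution(u, total)) >= tau}
    for src in graph[node]:
        build_route(src, tau, graph)
    return graph

# Extract the route for one prediction node and print the surviving edges.
route = build_route(("L2", 3), tau=0.1)
for node, kept in route.items():
    print(node, "<-", {src: round(c, 2) for src, c in kept.items()})
```

Because each edge's score is the normalized projection of its update onto the node's summed state, the scores at each node sum to one, which makes a fixed threshold like `tau = 0.1` easy to interpret; no patched second forward pass is needed, since all scores are computed from quantities available in the single cached pass.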