Spectral Filters, Dark Signals, and Attention Sinks
CoRR (2024)

Abstract
Projecting intermediate representations onto the vocabulary is an
increasingly popular interpretation tool for transformer-based LLMs, also known
as the logit lens. We propose a quantitative extension of this approach and
define spectral filters on intermediate representations based on partitioning
the singular vectors of the vocabulary embedding and unembedding matrices into
bands. We find that the signals exchanged in the tail end of the spectrum are
responsible for attention sinking (Xiao et al. 2023), for which we provide an
explanation. We find that the loss of pretrained models can be kept low despite
suppressing sizable parts of the embedding spectrum in a layer-dependent way,
as long as attention sinking is preserved. Finally, we discover that the
representations of tokens that draw attention from many other tokens have large
projections on the tail end of the spectrum.
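The band-partitioning idea can be illustrated with a small sketch. Assuming (as the abstract suggests, without giving details) that a spectral filter projects a hidden state onto the span of a contiguous band of right singular vectors of the unembedding matrix, a toy NumPy version looks like this; the matrix sizes, band boundaries, and function name are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 1000, 64                      # toy vocabulary size and hidden width
W_U = rng.normal(size=(vocab, d))        # stand-in for the unembedding matrix

# SVD: W_U = U @ diag(S) @ Vt; rows of Vt are right singular vectors in R^d,
# ordered from the head (largest singular value) to the tail of the spectrum.
U, S, Vt = np.linalg.svd(W_U, full_matrices=False)

def spectral_filter(h, lo, hi):
    """Keep only the component of h lying in the span of singular
    vectors lo..hi (one 'band' of the embedding spectrum)."""
    V_band = Vt[lo:hi]                   # (hi - lo, d)
    return V_band.T @ (V_band @ h)       # orthogonal projection onto the band

h = rng.normal(size=d)                   # a toy intermediate representation
head = spectral_filter(h, 0, 16)         # head of the spectrum
tail = spectral_filter(h, 48, 64)        # tail end, tied to attention sinking

# Disjoint bands are orthogonal, so band projections covering the whole
# spectrum sum back to the original representation.
full = sum(spectral_filter(h, i, i + 16) for i in range(0, d, 16))
assert np.allclose(full, h)
```

Because the right singular vectors form an orthonormal basis of the hidden space, suppressing a band (as in the paper's loss experiments) amounts to subtracting its projection from `h`, and the remaining bands are left untouched.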