ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys
arXiv (2024)
Abstract
We propose a new attention mechanism with linear complexity, ATP, that
fixates Attention on Top Principal keys, rather than
on each individual token. ATP is motivated by the key observation that
input sequences are typically low-rank, i.e., they
can be represented by a few principal bases. Therefore, instead of directly
iterating over all the input tokens, ATP transforms inputs into an orthogonal
space and computes attention only on the top principal bases (keys). Owing to
the observed low-rank structure in input sequences, ATP is able to capture
semantic relationships in input sequences with a few principal keys.
Furthermore, the attention complexity is reduced from quadratic to
linear without incurring a noticeable performance drop. ATP further
reduces complexity for other linear layers with low-rank inputs, leading to
more speedup compared to prior works that solely target the attention module.
Our evaluations on various models (e.g., BERT and Llama) demonstrate that ATP
achieves comparable accuracy with much lower computation and memory complexity
than the standard attention mechanism. In particular, ATP barely loses accuracy
with only 1/2 of the principal keys, and incurs only around a 2% accuracy drop
with 1/4 of the principal keys.
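The mechanism described above can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation: we assume the keys are factored with an SVD, the top-r components serve as the "principal keys", and the values are projected into the same r-dimensional subspace, so attention costs O(nrd) instead of O(n²d). The function name `atp_attention` and the exact projection choices are hypothetical.

```python
import numpy as np

def atp_attention(Q, K, V, r):
    """Hedged sketch: attend over top-r principal keys, not all n tokens.

    Hypothetical simplification of the ATP idea: decompose K via SVD,
    keep the r leading components as principal keys, and project V into
    the corresponding r-dimensional token subspace.
    """
    d = Q.shape[-1]
    # Orthogonal decomposition of the keys: K = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(K, full_matrices=False)
    Kp = S[:r, None] * Vt[:r]           # (r, d) principal keys
    Vp = U[:, :r].T @ V                 # (r, d) values in the principal basis
    scores = Q @ Kp.T / np.sqrt(d)      # (n, r) instead of (n, n)
    # numerically stable softmax over the r principal keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ Vp                 # (n, d) attention output

# Toy usage: sequence length 16, hidden size 8, keep 4 principal keys (r = n/4).
rng = np.random.default_rng(0)
n, d, r = 16, 8, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = atp_attention(Q, K, V, r)
print(out.shape)  # (16, 8)
```

Because the softmax is taken over only r scores, the per-token cost of the attention step scales linearly in sequence length, matching the complexity claim in the abstract.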