NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention
arXiv (2024)
Abstract
Large language model inference on Central Processing Units (CPUs) is
challenging due to the vast quantities of expensive Multiply-Add (MAD) matrix
operations in the attention computations. In this paper, we argue that there is
a rare gem in modern CPUs, Single-Instruction-Multiple-Data (SIMD) registers,
which allow for ultra-low-latency lookups in batch. We leverage this unique
capability of CPUs to propose NoMAD-Attention, an efficient attention algorithm
that replaces MAD operations with in-register lookups. Through hardware-aware
algorithmic designs, NoMAD-Attention achieves the computation of attention
scores using repeated fast accesses to SIMD registers despite their highly
limited sizes. Moreover, NoMAD-Attention works with pre-trained attention-based
LLMs without model finetuning. Empirical evaluations demonstrate that
NoMAD-Attention maintains the quality of the original LLMs well, and speeds up
the 4-bit quantized LLaMA-7B-based model by up to 2× at 16k context
length. Our results are reproducible at
https://github.com/tonyzhang617/nomad-dist.
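
To make the core idea concrete, below is a minimal sketch (not the paper's implementation) of how attention-score contributions can be computed with in-register lookups instead of multiply-adds on a CPU. It assumes keys have been product-quantized to 4-bit codes and that each sub-quantizer's partial dot products with the query have been precomputed into a 16-entry int8 lookup table; the function name `score16`, the int8 saturating accumulator, and the data layout are illustrative assumptions, not details from the paper.

```c
/* Sketch: multiply-add-free scoring of 16 keys via SIMD in-register lookups.
 * Assumptions (not from the paper): 4-bit PQ codes stored one per byte,
 * int8 quantized lookup tables, saturating int8 accumulation. */
#include <tmmintrin.h>  /* SSSE3: _mm_shuffle_epi8 (also pulls in SSE2) */
#include <stdint.h>

/* codes: for each sub-quantizer s, 16 key codes (each in 0..15), one byte
 *        per code, laid out contiguously: codes[16*s + k] is key k's code.
 * luts:  for each sub-quantizer s, a 16-entry int8 table holding the
 *        quantized partial dot product of the query with each centroid.
 * Returns 16 approximate score accumulators, one int8 lane per key. */
static inline __m128i score16(const uint8_t *codes, const int8_t *luts,
                              int num_subq) {
    __m128i acc = _mm_setzero_si128();
    for (int s = 0; s < num_subq; ++s) {
        /* Load this sub-quantizer's 16-entry table into one SIMD register. */
        __m128i lut = _mm_loadu_si128((const __m128i *)(luts + 16 * s));
        /* Load the 16 key codes that index into that table. */
        __m128i idx = _mm_loadu_si128((const __m128i *)(codes + 16 * s));
        /* One shuffle performs 16 parallel table lookups entirely
         * in-register: lane k receives lut[idx[k]]. No multiplies. */
        __m128i partial = _mm_shuffle_epi8(lut, idx);
        /* Accumulate partial scores with int8 saturation. */
        acc = _mm_adds_epi8(acc, partial);
    }
    return acc;
}
```

In practice a kernel like this would widen or periodically flush the accumulators (int8 lanes saturate quickly as sub-quantizers accumulate) and would convert the final accumulators back to floating point before the softmax; those details are omitted here for brevity.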