Quantization of Large Language Models with an Overdetermined Basis
arXiv (2024)

Abstract
In this paper, we introduce an algorithm for data quantization based on the
principles of Kashin representation. This approach hinges on decomposing any
given vector, matrix, or tensor into two factors. The first factor maintains a
small infinity norm, while the second exhibits a similarly constrained norm
when multiplied by an orthogonal matrix. Surprisingly, the entries of both
factors after decomposition are well concentrated around a few peaks, which
allows us to efficiently replace them with corresponding centroids for quantization
purposes. We study the theoretical properties of the proposed approach and
rigorously evaluate our compression algorithm in the context of next-word
prediction tasks and on a set of downstream tasks for text classification. Our
findings demonstrate that Kashin Quantization achieves competitive or superior
quality in model performance while ensuring data compression, marking a
significant advancement in the field of data quantization.
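The decomposition described above can be sketched with a simple alternating-clipping scheme: represent a vector x as u + Qv for a random orthogonal matrix Q, repeatedly clipping each factor at a level of order ||x||_2/sqrt(n) so that both infinity norms stay small. This is a minimal illustrative sketch, not the paper's exact algorithm; the clipping level's constant (here 2.0) and the iteration count are ad hoc assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Random orthogonal matrix via QR of a Gaussian matrix (an assumed choice of Q).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

def kashin_decompose(x, Q, level, n_iter=10):
    """Alternately clip the two factors so that x ~= u + Q @ v,
    with ||u||_inf <= level and ||v||_inf <= level."""
    u = np.zeros_like(x)
    v = np.zeros_like(x)
    for _ in range(n_iter):
        r = x - u - Q @ v                    # current residual
        u = np.clip(u + r, -level, level)    # absorb residual into u, clipped
        r = x - u - Q @ v
        v = np.clip(v + Q.T @ r, -level, level)  # absorb the rest into v, clipped
    return u, v

x = rng.standard_normal(n)
# Target infinity-norm level O(||x||_2 / sqrt(n)); the constant 2.0 is ad hoc.
level = 2.0 * np.linalg.norm(x) / np.sqrt(n)
u, v = kashin_decompose(x, Q, level)
rel_err = np.linalg.norm(x - u - Q @ v) / np.linalg.norm(x)
```

Because both factors end up with entries bounded by a small level and concentrated in a narrow range, each entry can then be replaced by its nearest centroid from a short codebook, which is the quantization step the abstract refers to.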