Data-free Weight Compress and Denoise for Large Language Models
CoRR (2024)
Abstract
Large Language Models (LLMs) are reshaping the research landscape in
artificial intelligence, particularly as model parameters scale up
significantly, unlocking remarkable capabilities across various domains.
Nevertheless, the scalability of model parameters faces constraints due to
limitations in GPU memory and computational speed. To address these
constraints, various weight compression methods have emerged, such as Pruning
and Quantization. Given the low-rank nature of weight matrices in language
models, reducing weights through matrix decomposition holds significant
promise. In this paper, drawing upon the intrinsic
structure of LLMs, we propose a novel approach termed Data-free Joint Rank-k
Approximation for compressing the parameter matrices. Notably, our method
requires no additional corpus, and it remains orthogonal to pruning and
quantization methods, so it can be applied alongside them. We achieve a model
pruning of 80% of parameters while retaining 93.43% of the original
performance.
Additionally, we explore the fundamental properties of the weight matrices of
LLMs that have undergone Rank-k Approximation and conduct comprehensive
experiments to elucidate our hypothesis.
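
To make the core operation concrete, below is a minimal sketch of plain rank-k approximation of a single weight matrix via truncated SVD in NumPy. This is only the building block the abstract alludes to, not the paper's data-free Joint Rank-k Approximation (which operates jointly across related weight matrices); the function name and matrix shapes are illustrative assumptions.

```python
import numpy as np

def rank_k_approximation(W: np.ndarray, k: int) -> np.ndarray:
    """Best rank-k approximation of W in Frobenius norm (Eckart-Young)."""
    # Thin SVD: W = U @ diag(S) @ Vt, with singular values in descending order
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the k largest singular values and their vectors
    return U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Toy usage: compress a random stand-in for a weight matrix to rank 8
W = np.random.randn(256, 256)
W_k = rank_k_approximation(W, k=8)
rel_err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
print(f"relative Frobenius error at rank 8: {rel_err:.3f}")
```

The compression comes from storage: keeping the truncated factors of an m×n matrix costs k(m + n + 1) values instead of mn, a large saving whenever k is much smaller than min(m, n).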