Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
CoRR (2023)
Abstract
Large Language Models (LLMs), renowned for their remarkable performance
across diverse domains, present a challenge when it comes to practical
deployment due to their colossal model size. In response to this challenge,
efforts have been directed toward the application of traditional network
pruning techniques to LLMs, uncovering a massive number of parameters that can
be pruned in one-shot without hurting performance. Prevailing LLM pruning
strategies have consistently adhered to the practice of uniformly pruning all
layers at equivalent sparsity, resulting in robust performance. However, this
observation stands in contrast to the prevailing trends observed in the field
of vision models, where non-uniform layerwise sparsity typically yields
stronger results. To understand the underlying reasons for this disparity, we
conduct a comprehensive study and discover a strong correlation with the
emergence of activation outliers in LLMs. Inspired by this finding, we
introduce a novel LLM pruning methodology that incorporates a tailored set of
non-uniform layerwise sparsity ratios, termed as Outlier Weighed Layerwise
sparsity (OWL). The sparsity ratio of OWL is proportional to the outlier ratio
observed within each layer, facilitating a more effective alignment between
layerwise weight sparsity and outlier ratios. Our empirical evaluation,
conducted across the LLaMA-V1 family and OPT, spanning various benchmarks,
demonstrates the distinct advantages offered by OWL over previous methods. For
instance, OWL exhibits a remarkable performance gain, surpassing the
state-of-the-art Wanda and SparseGPT by 61.22 and 6.80 perplexity at a high
sparsity level of 70%, respectively, while also delivering a speed-up in the
DeepSparse inference engine. Codes are available at
https://github.com/luuyin/OWL.
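
The abstract's key technical idea is that each layer's pruning sparsity should track its outlier ratio: layers with more activation-driven outliers are pruned less aggressively, while the average sparsity stays at the global target. Below is a minimal sketch of that allocation rule, assuming per-layer outlier ratios have already been measured (for example, the fraction of Wanda-style |W|·||X|| scores far above the layer mean); the function name, the clipping limit, and the example numbers are illustrative placeholders rather than the released implementation.

import numpy as np

def owl_layerwise_sparsity(outlier_ratios, target_sparsity=0.7, limit=0.08):
    """Allocate per-layer sparsity from per-layer outlier ratios (sketch).

    Layers with a higher outlier ratio are assigned lower sparsity, and the
    allocation is shifted so its mean matches the global sparsity budget,
    then clipped to stay within +/- `limit` of the target.
    """
    d = np.asarray(outlier_ratios, dtype=float)
    # Higher outlier ratio -> keep more weights -> lower sparsity.
    s = 1.0 - d
    # Shift so the average matches the global sparsity budget.
    s = s - s.mean() + target_sparsity
    # Constrain each layer to the target +/- limit; after clipping the mean
    # may deviate slightly from the target (acceptable for this sketch).
    return np.clip(s, target_sparsity - limit, target_sparsity + limit)

# Example: three layers with increasingly many outliers get decreasing sparsity.
print(owl_layerwise_sparsity([0.02, 0.10, 0.30], target_sparsity=0.7))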
Keywords
pruning LLMs, high sparsity, layerwise sparsity, OWL