ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks
CoRR (2023)
Abstract
This study examines 4-bit quantization methods like GPTQ in large language
models (LLMs), highlighting GPTQ's overfitting and limited enhancement in
zero-shot tasks. While prior works focus merely on zero-shot measurement, we
extend the task scope to more generative categories such as code generation and
abstractive summarization, where we find that INT4 quantization can
significantly underperform. However, simply shifting to higher-precision
formats like FP6 has been particularly challenging, and thus overlooked,
because current AI hardware lacks the sophisticated integration and system
acceleration strategies needed to make it perform well. Our results show that
FP6, even with a coarse-grain quantization scheme, performs robustly across
various algorithms and tasks, demonstrating its superiority in accuracy and
versatility. Notably, with FP6 quantization, the StarCoder-15B model performs
comparably to its FP16 counterpart in code generation, and smaller models such
as the 406M one closely match their baselines in summarization. Neither result
can be achieved by INT4. To better accommodate various AI hardware and achieve
the best system performance, we propose a novel 4+2 design for FP6 that
achieves latency similar to state-of-the-art INT4 fine-grain quantization. With
our design, FP6 can become a promising alternative to the current 4-bit
quantization methods used in LLMs.
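The abstract does not detail the FP6 format or the 4+2 packing. As a rough
illustration only, the sketch below quantizes a weight tensor onto an FP6 grid
with a single coarse-grain (per-tensor) scale, then splits each 6-bit code into
a 4-bit and a 2-bit segment for aligned storage. The E3M2 layout (1 sign, 3
exponent, 2 mantissa bits), the exponent bias, and all function names are
illustrative assumptions, not the paper's implementation.

    import numpy as np

    def fp6_grid(bias: int = 3) -> np.ndarray:
        """Non-negative values representable under an assumed E3M2 layout."""
        vals = []
        for e in range(8):            # 3 exponent bits
            for m in range(4):        # 2 mantissa bits
                if e == 0:            # subnormals: (m/4) * 2^(1-bias)
                    vals.append((m / 4) * 2.0 ** (1 - bias))
                else:                 # normals: (1 + m/4) * 2^(e-bias)
                    vals.append((1 + m / 4) * 2.0 ** (e - bias))
        return np.unique(np.array(vals))

    def quantize_fp6(w: np.ndarray) -> np.ndarray:
        """Coarse-grain round-to-nearest: one scale for the whole tensor."""
        grid = fp6_grid()
        scale = np.abs(w).max() / grid.max()
        mag = np.abs(w) / scale
        idx = np.abs(mag[..., None] - grid).argmin(axis=-1)
        return np.sign(w) * grid[idx] * scale

    def split_4_plus_2(codes: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Split each 6-bit code into a 4-bit and a 2-bit segment (the
        '4+2' idea), so both halves stay power-of-two aligned in memory."""
        hi4 = (codes >> 2) & 0xF      # upper 4 bits, laid out like INT4 data
        lo2 = codes & 0x3             # remaining 2 bits, packed separately
        return hi4, lo2

Keeping both memory streams at power-of-two widths is what lets a 6-bit format
approach the dequantization latency of INT4 kernels on existing hardware, which
is the motivation the abstract gives for the 4+2 design.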