Exploring the Potential of Flexible 8-bit Format: Design and Algorithm

Zhuoyi Zhang, Yunchen Zhang, Gonglei Shi, Yu Shen, Xiuying Wei, Ruihao Gong, Xiaoxu Xia, Qi Zhang, Lewei Lu, Xianglong Liu

CoRR (2023)

Abstract
Neural network quantization is widely used to reduce model inference complexity in real-world deployments. However, traditional integer quantization suffers from accuracy degradation when adapting to various dynamic ranges. Recent research has focused on a new 8-bit format, FP8, with hardware support for both training and inference of neural networks, but existing work lacks guidance for hardware design. In this paper, we analyze the benefits of FP8 quantization and provide a comprehensive comparison between FP8 and INT quantization. We then propose a flexible mixed-precision quantization framework that supports various number systems, enabling selection of the most appropriate quantization format for different neural network architectures. Experimental results demonstrate that our proposed framework achieves performance competitive with full precision on various tasks, including image classification, object detection, segmentation, and natural language understanding. Our work furnishes critical insights into the tangible benefits and feasibility of FP8 quantization, paving the way for improved neural network efficiency in real-world scenarios. Our code is available in the supplementary material.
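The dynamic-range issue the abstract describes can be illustrated with a small simulation. The sketch below is not the paper's framework: the E4M3-style rounding, the synthetic tensor, and the mean-absolute-error metric are all illustrative assumptions. It compares per-tensor symmetric INT8 quantization, whose step size is uniform, with a simulated FP8 (E4M3-like) quantization, whose step size scales with magnitude:

```python
import numpy as np

def quantize_int8(x, scale):
    """Uniform symmetric INT8 quantization: one step size everywhere."""
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale

def quantize_fp8_e4m3(x):
    """Simulated FP8 (E4M3-like) quantization: 4 exponent bits, 3 mantissa
    bits. The step size grows with magnitude, so both small and large
    values keep bounded *relative* error."""
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nonzero = x != 0
    # Exponent of each value, clipped to an E4M3-like normal range.
    e = np.floor(np.log2(np.abs(x[nonzero])))
    e = np.clip(e, -6, 8)
    # 3 mantissa bits -> 2**3 = 8 steps per binade.
    step = 2.0 ** (e - 3)
    out[nonzero] = np.round(x[nonzero] / step) * step
    return np.clip(out, -448.0, 448.0)  # 448 is the E4M3 max normal value

# Tensor with a wide dynamic range: many small values plus a few outliers.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.05, 1000), rng.normal(0, 8.0, 10)])

scale = np.abs(x).max() / 127  # per-tensor INT8 scale set by the outliers
err_int8 = np.abs(quantize_int8(x, scale) - x).mean()
err_fp8 = np.abs(quantize_fp8_e4m3(x) - x).mean()
print(f"mean abs error  INT8: {err_int8:.5f}   FP8: {err_fp8:.5f}")
```

Because the single INT8 scale must stretch to cover the outliers, the many small values land in only a few quantization bins and lose precision; FP8's magnitude-dependent step size keeps their error small. This is the kind of distribution-dependent trade-off that motivates choosing the quantization format per architecture.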
Keywords
algorithm