Improving Extreme Low-Bit Quantization With Soft Threshold

IEEE Transactions on Circuits and Systems for Video Technology (2023)

Abstract
Deep neural networks executing with low precision at inference time gain acceleration and compression advantages over their high-precision counterparts, but must overcome accuracy degradation as the bit-width decreases. This work focuses on sub-4-bit quantization, where accuracy degradation is significant. We start with ternarization, a balance between efficiency and accuracy that quantizes both weights and activations into ternary values. We find that the hard threshold $\Delta$ introduced in previous ternary networks for determining quantization intervals, together with the suboptimal solution of $\Delta$, limits the performance of the ternary model. To alleviate this, we present Soft Threshold Ternary Networks (STTN), which enables the model to determine ternarized values automatically instead of depending on a hard threshold. Building on this, we further generalize the idea of a soft threshold from ternarization to arbitrary bit-widths, named Soft Threshold Quantized Networks (STQN). We observe that previous quantization relies on the rounding-to-nearest function, constraining the quantization solution space and leading to significant accuracy degradation, especially in low-bit ($\leq 3$-bit) quantization. Instead of relying on the traditional rounding-to-nearest function, STQN determines quantization intervals adaptively by itself. Accuracy experiments on image classification, object detection, and instance segmentation, as well as efficiency experiments on a field-programmable gate array (FPGA), demonstrate that the proposed framework achieves a prominent tradeoff between accuracy and efficiency. Code is available at: https://github.com/WeixiangXu/STTN.
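To make the contrast concrete, the sketch below illustrates the two conventional baselines the abstract argues against: hard-threshold ternarization with a fixed $\Delta$, and uniform rounding-to-nearest quantization. This is an illustrative sketch only; the function names and the choice of a symmetric $[-1, 1]$ range are assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def hard_threshold_ternarize(w, delta):
    # Conventional ternarization the paper critiques: weights are mapped to
    # {-1, 0, +1} by comparing against a fixed hard threshold delta.
    return np.where(w > delta, 1.0, np.where(w < -delta, -1.0, 0.0))

def round_to_nearest_quantize(w, bits):
    # Conventional uniform quantization: clip to an assumed [-1, 1] range,
    # then snap to the nearest of (2**bits - 1) evenly spaced levels.
    # The quantization intervals are fixed by the rounding function,
    # which is the constraint STQN removes by learning intervals adaptively.
    scale = (2 ** bits - 1) / 2.0
    return np.round(np.clip(w, -1.0, 1.0) * scale) / scale

w = np.array([-0.8, -0.2, 0.05, 0.3, 0.9])
print(hard_threshold_ternarize(w, delta=0.25))  # [-1.  0.  0.  1.  1.]
print(round_to_nearest_quantize(w, bits=2))     # nearest of 4 levels in [-1, 1]
```

In both baselines the interval boundaries are fixed a priori (by $\Delta$ or by the rounding grid); the paper's soft-threshold approach instead lets training determine where those boundaries fall.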
Keywords
Convolutional neural network, network compression, low-bit quantization, ternary quantization