Instance-Aware Group Quantization for Vision Transformers
CVPR 2024
Abstract
Post-training quantization (PTQ) is an efficient model compression technique
that quantizes a pretrained full-precision model using only a small calibration
set of unlabeled samples without retraining. PTQ methods for convolutional
neural networks (CNNs) provide quantization results comparable to
full-precision counterparts. Directly applying them to vision transformers
(ViTs), however, incurs severe performance degradation, mainly due to the
differences in architectures between CNNs and ViTs. In particular, the
distribution of activations for each channel varies drastically according to
input instances, making PTQ methods for CNNs inappropriate for ViTs. To address
this, we introduce instance-aware group quantization for ViTs (IGQ-ViT). To
this end, we propose to split the channels of activation maps into multiple
groups dynamically for each input instance, such that activations within each
group share similar statistical properties. We also extend our scheme to
quantize softmax attentions across tokens. In addition, the number of groups
for each layer is adjusted to minimize the discrepancies between predictions
from quantized and full-precision models, under a bit-operation (BOP)
constraint. We show extensive experimental results on image classification,
object detection, and instance segmentation, with various transformer
architectures, demonstrating the effectiveness of our approach.
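The core idea described above, splitting activation channels into groups per input instance so that channels within a group share similar statistics and can share quantization parameters, can be illustrated with a minimal sketch. The function below is an assumption-laden simplification, not the paper's implementation: it groups channels of a single instance by their dynamic range and applies uniform affine quantize-dequantize with group-shared scale and offset.

```python
import numpy as np

def instance_aware_group_quant(x, num_groups=4, num_bits=8):
    """Simplified sketch of instance-aware group quantization.

    x: activation map of shape (channels, tokens) for one input instance.
    Channels are grouped dynamically (per instance) by their dynamic
    range, so channels in the same group share quantization parameters.
    """
    qmax = 2 ** num_bits - 1
    # Per-channel dynamic range for this particular instance.
    ranges = x.max(axis=1) - x.min(axis=1)
    # Sort channels by range and split into contiguous groups, so each
    # group contains channels with similar statistics.
    order = np.argsort(ranges)
    groups = np.array_split(order, num_groups)
    x_q = np.empty_like(x, dtype=np.float64)
    for idx in groups:
        lo, hi = x[idx].min(), x[idx].max()
        scale = (hi - lo) / qmax if hi > lo else 1.0
        # Uniform affine quantize-dequantize with group-shared params.
        q = np.clip(np.round((x[idx] - lo) / scale), 0, qmax)
        x_q[idx] = q * scale + lo
    return x_q
```

Because channels with extreme ranges no longer force a single coarse scale on well-behaved channels, the grouped scheme typically yields lower reconstruction error than one global scale per tensor when channel statistics differ strongly, which is the failure mode the abstract attributes to ViT activations.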