Similarity Knowledge Distillation with Calibrated Mask

Qi Wang, Wenxin Yu, Lu Che, Chang Liu, Zhiqiang Zhang, Jun Gong, Peng Chen

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract
In this paper, we propose a novel and efficient method for knowledge distillation that is structurally simple and adds negligible computational overhead. Our method comprises three modules. The first is a calibrated mask, which prevents the teacher model’s incorrect representations from disturbing the student model’s training; the second and third improve the student model’s performance through sample similarity and process similarity, respectively. By combining these three modules, the student model attains better performance in both qualitative and quantitative evaluation. We validate our method on standard datasets, including CIFAR-100 and TinyImageNet. The experimental results show that our method outperforms existing state-of-the-art approaches on both subjective and objective measures.
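The sketch below illustrates one plausible reading of the calibrated-mask idea: the distillation term is kept only for samples the teacher classifies correctly, so incorrect teacher predictions do not disturb the student's training. This is an assumption-based illustration in PyTorch, not the authors' implementation; the function name `masked_kd_loss` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def masked_kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    # Hard-label cross-entropy for the student.
    ce = F.cross_entropy(student_logits, targets)

    # Calibrated mask (assumed form): 1 where the teacher's prediction
    # matches the ground-truth label, 0 otherwise.
    mask = (teacher_logits.argmax(dim=1) == targets).float()

    # Standard temperature-softened KL distillation term, per sample.
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    kd_per_sample = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1) * (T * T)

    # Keep the distillation signal only where the teacher is correct.
    kd = (kd_per_sample * mask).sum() / mask.sum().clamp(min=1.0)

    return (1 - alpha) * ce + alpha * kd
```

The sample-similarity and process-similarity modules described in the abstract are not shown here, as the abstract does not specify how those similarities are computed.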
Keywords
Computer Vision,Deep Learning,Classification,Model Compression,Knowledge Distillation