Distilling a Powerful Student Model via Online Knowledge Distillation

IEEE Transactions on Neural Networks and Learning Systems (2023)

Abstract
Existing online knowledge distillation approaches either adopt the student with the best performance or construct an ensemble model for better holistic performance. However, the former strategy ignores the other students' information, while the latter increases the computational complexity during deployment. In this article, we propose a novel method for online knowledge distillation, termed feature fusion and self-distillation (FFSD), which comprises these two key components and addresses the above problems in a unified framework. Different from previous works, where all students are treated equally, FFSD splits them into a leader student and a set of common students. A feature fusion module then converts the concatenation of feature maps from all common students into a fused feature map, and this fused representation is used to assist the learning of the leader student. To enable the leader student to absorb more diverse information, we design an enhancement strategy that increases the diversity among students. In addition, a self-distillation module converts the feature maps of deeper layers into shallower ones, and the shallower layers are encouraged to mimic these transformed feature maps, which helps the students generalize better. After training, we simply deploy the leader student, which outperforms the common students, without increasing the storage or inference cost. Extensive experiments on CIFAR-100 and ImageNet demonstrate the superiority of FFSD over existing works. The code is available at https://github.com/SJLeo/FFSD.
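
The abstract describes two mechanisms: fusing the common students' feature maps to guide the leader student, and transforming deeper feature maps so that shallower layers can mimic them. The following is a minimal, illustrative sketch of these two ideas, assuming PyTorch; the module names, tensor shapes, and losses are hypothetical and do not reproduce the authors' implementation (see the linked repository for that).

# Illustrative sketch only; not the authors' FFSD code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Fuse concatenated feature maps from the common students into one map."""
    def __init__(self, num_students: int, channels: int):
        super().__init__()
        # A 1x1 convolution reduces the concatenated channels back to `channels`.
        self.fuse = nn.Conv2d(num_students * channels, channels, kernel_size=1)

    def forward(self, student_features):
        # student_features: list of tensors, each of shape (N, C, H, W)
        return self.fuse(torch.cat(student_features, dim=1))

class SelfDistillation(nn.Module):
    """Transform a deeper feature map so a shallower layer can mimic it."""
    def __init__(self, deep_channels: int, shallow_channels: int):
        super().__init__()
        self.transform = nn.Conv2d(deep_channels, shallow_channels, kernel_size=1)

    def forward(self, deep_feat, shallow_feat):
        # Upsample the deeper map to the shallower spatial size, project its
        # channels, and penalize the shallow layer for deviating from it.
        target = self.transform(
            F.interpolate(deep_feat, size=shallow_feat.shape[-2:],
                          mode="bilinear", align_corners=False)
        )
        return F.mse_loss(shallow_feat, target.detach())

if __name__ == "__main__":
    # Toy shapes: three common students; the leader mimics the fused map.
    common = [torch.randn(2, 64, 8, 8) for _ in range(3)]
    leader_feat = torch.randn(2, 64, 8, 8)
    fusion = FeatureFusion(num_students=3, channels=64)
    fusion_loss = F.mse_loss(leader_feat, fusion(common).detach())

    deep, shallow = torch.randn(2, 128, 4, 4), torch.randn(2, 64, 8, 8)
    sd_loss = SelfDistillation(128, 64)(deep, shallow)
    print(fusion_loss.item(), sd_loss.item())

Here a 1x1 convolution stands in for the fusion operator and a mean-squared-error term for the mimicking loss; the paper's actual fusion design, diversity-enhancement strategy, and distillation objectives may differ.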
Keywords
Training, Computational modeling, Knowledge engineering, Informatics, Optimization, Message passing, Memory management, Feature fusion, knowledge distillation, online distillation, self-distillation