Teacher-student complementary sample contrastive distillation

Zhiqiang Bao, Zhenhua Huang, Jianping Gou, Lan Du, Kang Liu, Jingtao Zhou, Yunwen Chen

Neural Networks (2024)

Abstract
Knowledge distillation (KD) is a widely adopted model compression technique for improving the performance of compact student models by utilizing the "dark knowledge" of a large teacher model. However, previous studies have not adequately investigated the effectiveness of the teacher model's supervision, and overconfident predictions in the student model may degrade its performance. In this work, we propose a novel framework, Teacher-Student Complementary Sample Contrastive Distillation (TSCSCD), that alleviates these challenges. TSCSCD consists of three key components: Contrastive Sample Hardness (CSH), Supervision Signal Correction (SSC), and Student Self-Learning (SSL). Specifically, CSH evaluates the teacher's supervision for each sample by comparing the predictions of two compact models, one distilled from the teacher and the other trained from scratch. SSC corrects weak supervision according to CSH, while SSL employs integrated learning among multiple classifiers to regularize overconfident predictions. Extensive experiments on four real-world datasets demonstrate that TSCSCD outperforms recent state-of-the-art knowledge distillation techniques.
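
The abstract does not give the exact formulation, but the CSH idea it describes can be illustrated with a minimal, hypothetical PyTorch sketch: down-weight the teacher's per-sample supervision when a distilled student and a scratch-trained compact model agree with each other yet disagree with the teacher. The function name kd_loss_with_csh, the 0.5 down-weighting factor, and the agreement rule below are assumptions for illustration, not the paper's actual method.

import torch
import torch.nn.functional as F

def kd_loss_with_csh(student_logits, teacher_logits, scratch_logits,
                     labels, T=4.0):
    # CSH-style per-sample weight (assumed form): if the distilled student
    # and the scratch-trained compact model agree with each other but both
    # disagree with the teacher, treat the teacher's supervision on that
    # sample as weak and halve its weight.
    with torch.no_grad():
        t_pred = teacher_logits.argmax(dim=1)
        s_pred = student_logits.argmax(dim=1)
        c_pred = scratch_logits.argmax(dim=1)
        weak = (s_pred == c_pred) & (s_pred != t_pred)
        w = 1.0 - 0.5 * weak.float()  # 0.5 factor is an assumption

    # Standard temperature-scaled distillation term, weighted per sample,
    # plus the usual cross-entropy against the ground-truth labels.
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1) * (T * T)
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    return (w * kd + ce).mean()

# Toy usage: a batch of 8 samples with 10 classes.
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
c = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
loss = kd_loss_with_csh(s, t, c, y)
loss.backward()

The design intuition is that a scratch-trained compact model serves as a teacher-free reference: where it matches the distilled student against the teacher, the teacher's signal is plausibly unreliable for that sample.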
Keywords
Knowledge distillation, Transfer learning, Model regularization, Sample hardness, Deep learning