Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-Tailed Classification

European Conference on Computer Vision (2020)

Citations 260 | Views 334
Abstract
In real-world scenarios, data tends to exhibit a long-tailed distribution, which increases the difficulty of training deep networks. In this paper, we propose a novel self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME). Our method is inspired by the observation that networks trained on less imbalanced subsets of the distribution often yield better performance than their jointly-trained counterparts. We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model. Specifically, the proposed framework involves two levels of adaptive learning schedules: Self-paced Expert Selection and Curriculum Instance Selection, so that the knowledge is adaptively transferred to the 'Student'. We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods. We also show that our method can be easily plugged into state-of-the-art long-tailed classification algorithms for further improvements.
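To make the core idea concrete, the sketch below shows one plausible form of multi-expert distillation in PyTorch: the student is trained with a cross-entropy term plus a KL-divergence term against each expert, where each expert's contribution is scaled by a per-expert weight. This is a minimal illustration, not the authors' released code; the function name `lfme_style_distillation_loss`, the temperature, the fixed `alpha` mixing coefficient, and the externally supplied `expert_weights` are all assumptions standing in for the paper's self-paced schedules.

```python
# Minimal sketch of multi-expert knowledge distillation with per-expert
# weights, assuming PyTorch. The weighting and hyperparameters are
# illustrative placeholders, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def lfme_style_distillation_loss(student_logits, expert_logits_list,
                                 expert_weights, targets,
                                 temperature=2.0, alpha=0.5):
    """Cross-entropy plus weighted KD from several experts.

    student_logits:     (B, C) logits from the unified student.
    expert_logits_list: list of (B, C) logits, one per 'Expert', each
                        assumed trained on a less imbalanced subset.
    expert_weights:     per-expert scalars in [0, 1]; in the paper these
                        adapt over training (Self-paced Expert Selection),
                        here they are simply passed in.
    """
    ce = F.cross_entropy(student_logits, targets)
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = student_logits.new_zeros(())
    for w, expert_logits in zip(expert_weights, expert_logits_list):
        p_expert = F.softmax(expert_logits / temperature, dim=1)
        # batchmean KL against the expert's softened distribution,
        # scaled by T^2 as in standard distillation
        kd = kd + w * F.kl_div(log_p_student, p_expert,
                               reduction="batchmean") * temperature ** 2
    return alpha * ce + (1 - alpha) * kd
```

The paper's second schedule, Curriculum Instance Selection, would act at the per-sample level; one natural (assumed) way to express it in this sketch is to replace `batchmean` reduction with per-sample KL terms multiplied by instance weights before averaging.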
Keywords
knowledge distillation, multiple experts, learning, classification, self-paced, long-tailed