A Mutually Attentive Co-Training Framework for Semi-Supervised Recognition

IEEE Transactions on Multimedia (2021)

Abstract
Self-training plays an important role in practical recognition applications where sufficient clean labels are unavailable. Existing methods focus on generating reliable pseudo labels to retrain a model, while ignoring the importance of making the model robust to the inevitably mislabeled data. In this paper, we propose a novel Mutually Attentive Co-training Framework (MACF) that effectively alleviates the negative impact of incorrect labels on model retraining by exploiting disagreements between deep models. Specifically, MACF trains two symmetric sub-networks that share the same input and are connected by several attention modules at different layers. Each attention module analyzes the features inferred by the two sub-networks for the same input and feeds back attention maps that indicate noisy gradients. The attention modules are designed by studying how incorrect labels propagate through back-propagation at different layers. Through this multi-layer interception, the noisy gradients caused by incorrect labels are effectively suppressed in both sub-networks, yielding training that is robust to potentially incorrect labels. In addition, a hierarchical distillation strategy is developed to improve the pseudo labels by aggregating the predictions from multiple models and data transformations. Experiments on six standard benchmarks, covering classification and biomedical segmentation, demonstrate that MACF is much more robust to noisy labels than previous methods.
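The abstract describes the architecture only at a high level. The sketch below illustrates the two ideas it names in PyTorch: attention modules that compare the two sub-networks' features and gate each branch, and pseudo-label aggregation over multiple models and data transformations. All module names, layer shapes, and the exact gating formulation here are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of mutually attentive co-training; details are assumptions.
import torch
import torch.nn as nn

class MutualAttention(nn.Module):
    """Compares the two sub-networks' features for the same input and emits
    an attention map for each branch. Gating the features re-weights them,
    which attenuates the gradients of suspect (likely mislabeled) samples
    during back-propagation."""
    def __init__(self, channels):
        super().__init__()
        # Jointly analyze both feature maps; sigmoid keeps gates in (0, 1).
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a, feat_b):
        gates = self.gate(torch.cat([feat_a, feat_b], dim=1))
        g_a, g_b = gates.chunk(2, dim=1)
        # Each gate is computed from BOTH branches, so disagreement between
        # the sub-networks can down-weight noisy gradients in either one.
        return feat_a * g_a, feat_b * g_b

class MACF(nn.Module):
    """Two symmetric sub-networks (given as lists of stages) whose
    intermediate features are exchanged through MutualAttention modules
    at several depths ("multi-layer interception")."""
    def __init__(self, stages_a, stages_b, stage_channels, head_a, head_b):
        super().__init__()
        self.stages_a = nn.ModuleList(stages_a)
        self.stages_b = nn.ModuleList(stages_b)
        self.attn = nn.ModuleList(MutualAttention(c) for c in stage_channels)
        self.head_a, self.head_b = head_a, head_b

    def forward(self, x):
        fa, fb = x, x  # both sub-networks see the same input
        for stage_a, stage_b, attn in zip(self.stages_a, self.stages_b, self.attn):
            fa, fb = stage_a(fa), stage_b(fb)
            fa, fb = attn(fa, fb)  # intercept noisy gradients at this depth
        return self.head_a(fa), self.head_b(fb)

def aggregate_pseudo_labels(models, transforms, image):
    """Hierarchical-distillation-style pseudo label: average the softmax
    predictions of multiple models over multiple data transformations.
    (Sketch only; the paper's exact aggregation scheme may differ.)"""
    preds = []
    with torch.no_grad():
        for model in models:
            for t in transforms:
                preds.append(model(t(image)).softmax(dim=1))
    return torch.stack(preds).mean(dim=0)
```

In this reading, both heads are trained on the (possibly noisy) pseudo labels, and the shared gates suppress the contribution of samples on which the two branches disagree most, which is what makes the retraining step tolerant of incorrect labels.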
Keywords
Self-training, mutual attention, noisy labels, recognition