Task-Specific Loss for Robust Instance Segmentation With Noisy Class Labels

IEEE Transactions on Circuits and Systems for Video Technology (2023)

Abstract
Deep learning methods have achieved significant progress in instance segmentation when trained on correctly annotated datasets. However, object classes in large-scale datasets are sometimes ambiguous, which easily causes confusion. Moreover, the limited experience and knowledge of annotators can lead to mislabeled object semantic classes. To address this issue, a novel method is proposed in this paper, which considers the different roles that noisy class labels play in different sub-tasks. Our method is based on two basic observations: firstly, the foreground-background annotation of a sample is correct even when its class label is noisy. Secondly, symmetric loss improves model robustness to noisy labels but harms the learning of hard samples, while cross entropy loss behaves in the opposite way. Based on these two observations, cross entropy loss is used in the foreground-background sub-task to fully exploit correct gradient guidance, and symmetric loss is used in the foreground-instance sub-task to suppress the incorrect gradient guidance produced by noisy class labels. Furthermore, we apply a contrastive self-supervised loss to update the features of all foreground instances, compensating for the insufficient guidance provided by partially correct labels, especially in highly noisy settings. Extensive experiments conducted on three popular datasets (i.e., Pascal VOC, Cityscapes and COCO) demonstrate the effectiveness of our method across a wide range of noisy class label scenarios.
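The split described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: it uses the common symmetric cross entropy form (CE plus a clipped reverse cross entropy term), whose bounded reverse term limits how much gradient a mislabeled sample can contribute, while the foreground-background sub-task keeps plain cross entropy. All function names and the weights `alpha`, `beta` are hypothetical.

```python
import numpy as np

def cross_entropy(probs, label):
    # Standard CE over predicted class probabilities: used for the
    # foreground-background sub-task, where labels are assumed correct,
    # so its strong gradient guidance can be exploited fully.
    return -np.log(probs[label] + 1e-12)

def reverse_cross_entropy(probs, label, clip=-4.0):
    # Reverse CE: -sum_k q(k-as-prediction) terms with log(0) of the
    # one-hot label clipped to a constant. The result is bounded by
    # -clip, so a noisy label cannot dominate the loss.
    log_q = np.full_like(probs, clip)  # log of one-hot label, clipped
    log_q[label] = 0.0                 # log(1) = 0 at the labeled class
    return -np.sum(probs * log_q)

def symmetric_ce(probs, label, alpha=0.1, beta=1.0):
    # Symmetric loss = alpha * CE + beta * reverse CE: used for the
    # foreground-instance (classification) sub-task, where class labels
    # may be noisy.
    return alpha * cross_entropy(probs, label) \
         + beta * reverse_cross_entropy(probs, label)
```

For a confident prediction `probs = [0.7, 0.2, 0.1]`, the symmetric loss against the matching label (0) is much smaller than against a wrong label (2), yet the reverse-CE term stays bounded regardless of the label, which is the robustness property the abstract relies on.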
Keywords
Noisy class labels, instance segmentation, foreground-background sub-task, foreground-instance sub-task, self-supervised learning