Circle Loss: A Unified Perspective Of Pair Similarity Optimization

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Abstract
This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity s(p) and minimize the between-class similarity s(n). We find that a majority of loss functions, including the triplet loss and the softmax cross-entropy loss, embed s(n) and s(p) into similarity pairs and seek to reduce (s(n) - s(p)). Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. The result is the Circle loss, named for its circular decision boundary. The Circle loss has a unified formula for the two elemental deep feature learning paradigms, i.e., learning with class-level labels and learning with pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with loss functions that optimize (s(n) - s(p)). Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, and several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.
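The re-weighting idea described above can be sketched in a few lines of NumPy. The sketch below follows the paper's unified formula: each similarity gets a self-paced weight proportional to its distance from its optimum (O_p = 1 + m for s_p, O_n = -m for s_n), so less-optimized scores contribute larger gradients. The margin m and scale gamma are hyperparameters; the default values here are illustrative assumptions, not prescriptions from this abstract.

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=64):
    """Sketch of the Circle loss on raw similarity scores.

    sp: array of within-class similarities s_p (ideally near 1)
    sn: array of between-class similarities s_n (ideally near 0)
    m:  relaxation margin; gamma: scale factor (illustrative defaults)
    """
    # Optima and decision margins derived from m (paper's convention)
    op, on = 1 + m, -m      # optimum for s_p and s_n
    dp, dn = 1 - m, m       # margins Delta_p and Delta_n
    # Self-paced weights: emphasize the less-optimized scores
    ap = np.clip(op - sp, 0.0, None)
    an = np.clip(sn - on, 0.0, None)
    # Weighted logits; signs push s_p up and s_n down
    logit_p = -gamma * ap * (sp - dp)
    logit_n = gamma * an * (sn - dn)
    # Unified formula: log(1 + sum_j e^{logit_n} * sum_i e^{logit_p})
    return float(np.log1p(np.exp(logit_n).sum() * np.exp(logit_p).sum()))
```

A well-optimized pair set (s_p near 1, s_n near 0) yields a loss near zero, while a badly separated one is penalized heavily, and the per-score weights `ap`/`an` make the penalty strength differ across scores, unlike plain (s(n) - s(p)) optimization.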
Keywords
person re-identification,face recognition,fine-grained image retrieval datasets,pair similarity optimization,unified perspective,pair-wise labels,elemental deep feature learning paradigms,less-optimized similarity scores,single similarity score,similarity pairs,softmax cross-entropy loss,triplet loss,loss functions,between-class similarity,within-class similarity,Circle loss