Kernel Square-Loss Exemplar Machines for Image Retrieval

30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017)

Abstract
Zepeda and Pérez [41] have recently demonstrated the promise of the exemplar SVM (ESVM) as a feature encoder for image retrieval. This paper extends this approach in several directions: We first show that replacing the hinge loss by the square loss in the ESVM cost function significantly reduces encoding time with negligible effect on accuracy. We call this model the square-loss exemplar machine, or SLEM. We then introduce a kernelized SLEM which can be implemented efficiently through low-rank matrix decomposition, and displays improved performance. Both SLEM variants exploit the fact that the negative examples are fixed, so most of the SLEM computational complexity is relegated to an offline process independent of the positive examples. Our experiments establish the performance and computational advantages of our approach using a large array of base features and standard image retrieval datasets.
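To make the square-loss speedup concrete, here is a minimal NumPy sketch of the idea, not the authors' implementation: with label +1 for the exemplar and −1 for the fixed negatives, the square-loss objective min_w λ‖w‖² + Σ_i (wᵀx_i − y_i)² is a ridge regression with the closed-form solution (λI + Σ_i x_i x_iᵀ) w = Σ_i y_i x_i. The bias term and the positive/negative weighting used in the paper are omitted, and the names precompute_negatives, slem_encode, and lam are illustrative.

```python
import numpy as np

def precompute_negatives(X_neg, lam=1.0):
    # Offline step, independent of any positive example:
    # regularized Gram matrix lam*I + sum_i x_i x_i^T of the fixed
    # negative pool, plus the negatives' contribution sum_i y_i x_i
    # with y_i = -1 for every negative.
    d = X_neg.shape[1]
    A_neg = X_neg.T @ X_neg + lam * np.eye(d)
    b_neg = -X_neg.sum(axis=0)
    return A_neg, b_neg

def slem_encode(x_pos, A_neg, b_neg):
    # Online step for one positive x_pos (label y_0 = +1): only a
    # rank-one update of the precomputed system is needed, then a
    # single linear solve yields the exemplar weights w used as the
    # SLEM encoding of x_pos.
    A = A_neg + np.outer(x_pos, x_pos)
    b = b_neg + x_pos
    return np.linalg.solve(A, b)

# Toy usage: a pool of 1000 negatives with 128-D base features.
rng = np.random.default_rng(0)
X_neg = rng.standard_normal((1000, 128))
A_neg, b_neg = precompute_negatives(X_neg, lam=1.0)
w = slem_encode(rng.standard_normal(128), A_neg, b_neg)
```

Because the negative pool is fixed, the d×d system matrix and the negatives' sum are computed once offline; each new positive then costs only a rank-one update and one linear solve (or a factor update in a tuned implementation), which is the source of the encoding speedup the abstract describes. The kernelized variant applies the same offline/online split to a low-rank decomposition of the negatives' kernel matrix.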
Keywords
kernel square-loss exemplar machines, exemplar SVM (ESVM), feature encoder, hinge loss, square loss, kernelized SLEM, low-rank matrix decomposition, image retrieval