Deep Coattention-Based Comparator for Relative Representation Learning in Person Re-Identification

IEEE Transactions on Neural Networks and Learning Systems (2021)

Abstract
Person re-identification (re-ID) requires discriminative representations over unseen shots to recognize identities across disjoint camera views. Effective methods have been developed via pair-wise similarity learning to detect a fixed set of region features, which can then be mapped to compute a similarity value. However, the relevant parts of each image are detected independently, without reference to the correlated regions in the other image. Moreover, region-based methods rely on spatially positioning local features to compute aligned similarities. In this article, we introduce the deep coattention-based comparator (DCC), which fuses codependent representations of paired images so as to correlate their most relevant parts and produce relative representations accordingly. The proposed approach mimics human foveation by detecting distinct regions concurrently across both images and alternately attending to them to fuse them into the similarity learning. Our comparator is capable of learning representations relative to a test shot and is well suited to re-identifying pedestrians in surveillance. We perform extensive experiments to provide insights and demonstrate the state-of-the-art results achieved by our method on benchmark data sets: gains of 1.2 and 2.5 points in mean average precision (mAP) on DukeMTMC-reID and Market-1501, respectively.
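The core idea described above, attending to each image's regions conditioned on the other image, can be illustrated with a minimal coattention sketch. This is not the authors' implementation; the function name, dimensions, and the plain dot-product affinity are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coattention(feat_a, feat_b):
    """Toy coattention between two images' region features.

    feat_a: (n, d) region descriptors of image A.
    feat_b: (m, d) region descriptors of image B.
    Returns each image's regions summarized relative to the other,
    so downstream similarity learning sees codependent representations.
    """
    affinity = feat_a @ feat_b.T           # (n, m) cross-image region affinity
    attn_over_a = softmax(affinity, axis=0)  # weights A's regions per region of B
    attn_over_b = softmax(affinity, axis=1)  # weights B's regions per region of A
    b_given_a = attn_over_b @ feat_b       # (n, d): B attended from A's viewpoint
    a_given_b = attn_over_a.T @ feat_a     # (m, d): A attended from B's viewpoint
    return b_given_a, a_given_b

rng = np.random.default_rng(0)
fa = rng.normal(size=(6, 16))   # 6 hypothetical regions of image A
fb = rng.normal(size=(8, 16))   # 8 hypothetical regions of image B
rel_b, rel_a = coattention(fa, fb)
```

In the full DCC this cross-attended fusion would feed a learned similarity head rather than being used directly; the sketch only shows how paired images can condition each other's region representations.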
Keywords
Algorithms,Artificial Intelligence,Attention,Automated Facial Recognition,Benchmarking,Biometric Identification,Databases, Factual,Deep Learning,Humans,Image Processing, Computer-Assisted,Neural Networks, Computer,Reproducibility of Results,Software