Feature separation and double causal comparison loss for visible and infrared person re-identification

Knowledge-Based Systems (2022)

Abstract
Visible-infrared cross-modality person re-identification (VI-ReID) is the task of matching person images across visible and infrared cameras. Most previous VI-ReID algorithms focused only on learning common representations across the two modalities. In contrast, we extract identity-related features from each modality, filter out identity-independent interference, and let the network learn domain-invariant features as a more effective representation. In this paper, a novel end-to-end feature separation and double causal comparison loss framework (FSDCC) is proposed for VI-ReID. We first separate features with a feature separation module (FSM) to obtain strongly identity-related and weakly identity-related features; a double causal comparison loss then guides model training. This process effectively reduces the influence of identity-irrelevant information such as occlusion and background, and ultimately strengthens the expression of identity-relevant features. In addition, we combine identity loss and weighted regularization TriHard loss in a progressive joint training scheme. To enhance the CNN's ability to capture global semantic information and to better model dependencies between distant pixels in an image, we further propose a CNS non-local neural network (CNS non-local), which improves VI-ReID accuracy. Extensive experiments on two cross-modality datasets demonstrate that the proposed method outperforms current state-of-the-art methods by a large margin, achieving rank-1/mAP accuracy of 87.18%/79.10% on the RegDB dataset and 68.79%/65.72% on the SYSU-MM01 dataset.
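The abstract does not give the internal design of the CNS non-local block, but the mechanism it extends is the standard non-local block of Wang et al. (2018), in which every spatial position attends to every other position so that two distant pixels can interact directly. The following is a minimal sketch of that standard block, not the paper's CNS variant; the class name, channel sizes, and reduction factor are assumptions chosen for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock2D(nn.Module):
    """Standard embedded-Gaussian non-local block: each output position
    aggregates features from all positions, weighted by pairwise affinity."""
    def __init__(self, in_channels: int, reduction: int = 2):
        super().__init__()
        inter = max(in_channels // reduction, 1)
        self.theta = nn.Conv2d(in_channels, inter, kernel_size=1)  # query projection
        self.phi = nn.Conv2d(in_channels, inter, kernel_size=1)    # key projection
        self.g = nn.Conv2d(in_channels, inter, kernel_size=1)      # value projection
        self.out = nn.Conv2d(inter, in_channels, kernel_size=1)    # restore channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = F.softmax(q @ k, dim=-1)                # (B, HW, HW) pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

if __name__ == "__main__":
    feat = torch.randn(2, 256, 24, 8)        # e.g. a ReID backbone feature map
    print(NonLocalBlock2D(256)(feat).shape)  # torch.Size([2, 256, 24, 8])

Inserted after a convolutional stage, such a block lets the network relate body regions far apart in the image (head and shoes, for instance), which is the global-dependency property the abstract attributes to CNS non-local.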
Keywords
Visible-infrared retrieval, Cross-modality, Feature separation, Double causal comparison