DMA: Dual Modality-Aware Alignment for Visible-Infrared Person Re-Identification

IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (2024)

Abstract
Visible-infrared person re-identification (VI-ReID) aims to identify the same person across visible and infrared images. Its main challenge is extracting modality-irrelevant person identity information. To alleviate cross-modality discrepancies, existing methods typically follow two paradigms: 1) transform visible images into the gray-scale color space and map them into the infrared domain; 2) stack infrared images into the RGB color space and map them into the visible domain. However, limited by the different optical properties of visible and infrared waves, such mapping commonly leads to information asymmetry. Although some efforts prevent such discrepancies through data-level alignment, they typically introduce misleading information at the same time and bring extra divergence. Therefore, existing methods fail to effectively eliminate the modality discrepancies. In this paper, we first analyze the essential factors behind the generation of modality discrepancies. Second, we propose a novel Dual Modality-aware Alignment (DMA) model for VI-ReID, which can preserve discriminative identity information and suppress misleading information within a uniform scheme. In particular, based on the intrinsic optical properties of both modalities, a Dual Modality Transfer (DMT) module is proposed to compensate for the information asymmetry in the HSV color space, thereby effectively alleviating cross-modality discrepancies and better preserving discriminative identity features. Further, an Intra-local Alignment (IA) module is proposed to suppress misleading information, where a fine-grained local consistency objective function is designed to achieve more compact intra-class representations. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our method and its competitive performance against state-of-the-art methods. The source code of this paper is available at https://github.com/PKU-ICST-MIPL/DMA_TIFS2023.
Keywords
Visible-infrared person re-identification, cross-modality discrepancies, dual modality transfer, intra-local alignment
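The two mapping paradigms the abstract criticizes can be sketched as follows. This is a minimal illustration under common conventions (BT.601 luminance weights for gray-scale conversion, channel replication for infrared); the function names are hypothetical and do not come from the paper's code. The final assertion makes the information asymmetry concrete: after either mapping, all three channels are identical, so the color information a visible image originally carried is lost.

```python
import numpy as np

def visible_to_gray3(rgb: np.ndarray) -> np.ndarray:
    """Paradigm 1: map a visible RGB image (H, W, 3) into a 3-channel
    gray-scale image, discarding color so it resembles the infrared domain.
    Uses ITU-R BT.601 luminance weights (a common convention)."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.stack([gray, gray, gray], axis=-1)

def infrared_to_rgb3(ir: np.ndarray) -> np.ndarray:
    """Paradigm 2: stack a single-channel infrared image (H, W) into a
    3-channel tensor so it can be fed to an RGB backbone."""
    return np.stack([ir, ir, ir], axis=-1)

# Toy inputs: a random "visible" image and a random "infrared" image.
vis = np.random.rand(4, 4, 3)
ir = np.random.rand(4, 4)

g = visible_to_gray3(vis)
r = infrared_to_rgb3(ir)
assert g.shape == (4, 4, 3) and r.shape == (4, 4, 3)

# Information asymmetry: every channel of the mapped image is identical,
# i.e. the mapping collapses modality-specific information. This is the
# loss the paper's DMT module is designed to compensate for.
assert np.allclose(g[..., 0], g[..., 1]) and np.allclose(g[..., 1], g[..., 2])
assert np.allclose(r[..., 0], r[..., 2])
```

The sketch only shows the baseline mappings, not the DMA model itself; the paper's DMT module instead operates in the HSV color space to compensate for exactly this kind of lost information.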