Identity Feature Disentanglement for Visible-Infrared Person Re-Identification

ACM Transactions on Multimedia Computing, Communications, and Applications (2023)

Abstract
The visible-infrared person re-identification (VI-ReID) task aims to retrieve persons across cameras of different spectra (i.e., visible and infrared images). The biggest challenge of VI-ReID is the huge cross-modal discrepancy caused by the different imaging mechanisms. Many VI-ReID methods have been proposed that embed person images of different modalities into a shared feature space to narrow this discrepancy. However, these methods ignore the purification of identity features, so the identity features still contain modality-specific information and fail to align well. In this article, an identity feature disentanglement method is proposed to separate identity features from identity-irrelevant information, such as pose and modality. Specifically, images of different modalities are first processed to extract shared features that preliminarily reduce the cross-modal discrepancy. The extracted feature of each image is then disentangled into a latent identity variable and an identity-irrelevant variable. To enforce that the latent identity variable contains as much identity information and as little identity-irrelevant information as possible, an ID-discriminative loss and an ID-swapping reconstruction process are additionally designed. Extensive quantitative and qualitative experiments on two popular public VI-ReID datasets, RegDB and SYSU-MM01, demonstrate the efficacy and superiority of the proposed method.
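
To make the disentanglement and ID-swapping idea concrete, below is a minimal PyTorch-style sketch assuming a pre-extracted shared feature per image. The module names, layer sizes, reconstruction target, and loss combination are hypothetical illustrations of the general technique, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Hypothetical sketch: split a shared feature into an identity variable
    and an identity-irrelevant variable, with an ID classifier and a decoder."""
    def __init__(self, feat_dim=2048, id_dim=512, irr_dim=512, num_ids=395):
        super().__init__()
        self.id_head = nn.Sequential(nn.Linear(feat_dim, id_dim), nn.BatchNorm1d(id_dim))
        self.irr_head = nn.Sequential(nn.Linear(feat_dim, irr_dim), nn.BatchNorm1d(irr_dim))
        self.classifier = nn.Linear(id_dim, num_ids)          # for the ID-discriminative loss
        self.decoder = nn.Sequential(                          # reconstructs the shared feature
            nn.Linear(id_dim + irr_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))

    def forward(self, shared_feat):
        z_id = self.id_head(shared_feat)     # latent identity variable
        z_irr = self.irr_head(shared_feat)   # identity-irrelevant variable (pose, modality, ...)
        return z_id, z_irr, self.classifier(z_id)

    def reconstruct(self, z_id, z_irr):
        return self.decoder(torch.cat([z_id, z_irr], dim=1))


def id_swap_step(model, feat_vis, feat_ir, labels):
    """Hypothetical training step on visible/infrared features of the SAME identities.
    Swapping identity variables across modalities before reconstruction only works
    if z_id carries identity information and z_irr keeps the rest."""
    ce, l1 = nn.CrossEntropyLoss(), nn.L1Loss()
    zid_v, zirr_v, logit_v = model(feat_vis)
    zid_i, zirr_i, logit_i = model(feat_ir)

    # ID-discriminative loss on both modalities.
    loss_id = ce(logit_v, labels) + ce(logit_i, labels)

    # ID-swapping reconstruction: rebuild each modality's feature using the
    # identity variable taken from the other modality.
    rec_v = model.reconstruct(zid_i, zirr_v)
    rec_i = model.reconstruct(zid_v, zirr_i)
    loss_rec = l1(rec_v, feat_vis) + l1(rec_i, feat_ir)

    return loss_id + loss_rec   # loss weighting is an assumption
```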
Keywords
identity, person, visible-infrared, re-identification