Global-Local Graph Convolutional Network for cross-modality person re-identification

Neurocomputing (2021)

Abstract
Visible-thermal person re-identification (VT-ReID) is an important task for retrieving pedestrians across the visible and thermal modalities. It compensates for the shortcomings of single-modality person re-identification in nighttime surveillance applications. Most existing methods extract the features of different images or parts independently, ignoring the potential relationships between them. In this paper, we propose a novel Global-Local Graph Convolutional Network (GLGCN) that learns discriminative feature representations by modeling these relations with a graph convolutional network. The local graph module builds the potential relations among different body parts within each modality to extract discriminative part-level features. The global graph module constructs the contextual relation of the same identity across the two modalities to reduce the modality discrepancy. Training the two modules jointly further improves the robustness of the model. Experimental results on the SYSU-MM01 and RegDB datasets demonstrate that our model outperforms state-of-the-art methods.
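The abstract gives no implementation details, so the following is only a minimal sketch of the relation-modeling idea it describes: part-level features connected by a local graph within one modality, and same-identity features connected by a global graph across modalities. It assumes a PyTorch implementation, a standard GCN propagation rule H' = ReLU(D^(-1/2) A D^(-1/2) H W), fully connected adjacencies, and illustrative dimensions; the class name GraphConvLayer, the part count, and the adjacency construction are hypothetical and may differ from the paper's actual modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConvLayer(nn.Module):
    """One graph convolution step: H' = ReLU(A_norm @ H @ W).

    A generic GCN building block; how GLGCN defines its nodes and
    adjacency is not specified in the abstract.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # x: (batch, num_nodes, in_dim); adj: (batch, num_nodes, num_nodes)
        deg = adj.sum(dim=-1).clamp(min=1e-6)          # node degrees
        d_inv_sqrt = deg.pow(-0.5)
        # Symmetric normalization: D^(-1/2) A D^(-1/2)
        adj_norm = adj * d_inv_sqrt.unsqueeze(-1) * d_inv_sqrt.unsqueeze(-2)
        return F.relu(adj_norm @ self.fc(x))


# --- Illustrative usage (all shapes and adjacencies are assumptions) ---
B, P, D = 8, 6, 2048                        # batch, parts per image, feature dim

# Local graph: relate the P body-part features of each image.
parts = torch.randn(B, P, D)                # part-level features from a backbone
local_adj = torch.ones(B, P, P)             # fully connected parts (assumption)
local_gcn = GraphConvLayer(D, 512)
part_relational = local_gcn(parts, local_adj)         # (B, P, 512)

# Global graph: relate visible and thermal features of the same identity.
vis_feat = torch.randn(B, 1, D)             # visible-modality feature
thm_feat = torch.randn(B, 1, D)             # thermal-modality feature
nodes = torch.cat([vis_feat, thm_feat], dim=1)        # (B, 2, D)
global_adj = torch.ones(B, 2, 2)            # cross-modality relation (assumption)
global_gcn = GraphConvLayer(D, 512)
cross_modal = global_gcn(nodes, global_adj)           # (B, 2, 512)
```

In this sketch the local graph relates the part features of a single image and the global graph relates the visible and thermal features of one identity; the actual GLGCN presumably learns or weights these adjacencies rather than using uniform ones, and trains both modules jointly as stated in the abstract.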
Keywords
Cross-modality person re-identification, Visible-thermal, Local relation, Global relation, Graph Convolutional Network