A dual-modal graph attention interaction network for person Re-identification

IET Computer Vision (2023)

Abstract
Person Re-identification (Re-ID) is the task of matching target pedestrians across non-overlapping surveillance cameras. Learning discriminative feature representations is the central issue in person Re-ID. A few recent methods introduce text descriptions as auxiliary information to enhance feature representations, as they offer richer semantic information and perspective consistency. However, these works usually process text and images separately, which leads to an absence of cross-modal interaction. In this article, a Dual-modal Graph Attention Interaction Network (Dual-GAIN) is proposed that integrates visual features and textual features into a single heterogeneous graph to model the relationship between them simultaneously. The proposed Dual-GAIN consists of two main components: a dual-stream feature extractor and a Graph Attention Interaction Network (GAIN). Specifically, the dual-stream feature extractor extracts visual and textual features respectively. Visual local features and textual features are then treated as nodes to construct a multi-modal graph. Cosine-similarity-constrained attention weights are introduced in GAIN, which is designed for cross-modal interaction and feature fusion on this heterogeneous multi-modal graph. Experiments on public large-scale datasets, namely Market-1501, CUHK03 labelled, and CUHK03 detected, demonstrate that our method achieves state-of-the-art performance.
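To make the cosine-similarity-constrained attention concrete, the following is a minimal PyTorch sketch of one graph attention layer over a fully connected multi-modal graph whose nodes are visual local features and textual features. The class name CosineConstrainedAttention, the single-head formulation, and the exact way the cosine term modulates the attention logits are illustrative assumptions based only on the abstract, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineConstrainedAttention(nn.Module):
    """One cross-modal graph attention layer over a fully connected
    heterogeneous graph. Nodes are visual local features and textual
    features stacked together; the learned attention logits are
    modulated by cosine similarity between node embeddings (a
    hypothetical reading of 'cosine similarity constrained attention
    weights' from the abstract)."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (N, dim) -- all graph nodes, both modalities.
        q, k, v = self.query(nodes), self.key(nodes), self.value(nodes)
        logits = (q @ k.t()) * self.scale                      # (N, N) attention logits
        unit = F.normalize(nodes, dim=-1)
        cos = unit @ unit.t()                                  # (N, N) pairwise cosine similarity
        attn = F.softmax(logits * cos.clamp(min=0), dim=-1)    # cosine-constrained weights
        return nodes + attn @ v                                # residual cross-modal fusion


# Toy usage: 6 visual part features plus 1 textual feature, dim 256.
visual_nodes = torch.randn(6, 256)
text_nodes = torch.randn(1, 256)
graph = torch.cat([visual_nodes, text_nodes], dim=0)
fused = CosineConstrainedAttention(256)(graph)
print(fused.shape)  # torch.Size([7, 256])
```

Clamping the cosine term at zero suppresses attention between dissimilar cross-modal nodes, which is one plausible way to realise the stated constraint; the full paper should be consulted for the method's actual definition.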
Keywords
computer vision,learning (artificial intelligence)