Adaptive multi-task learning for cross domain and modal person re-identification

Neurocomputing (2022)

Abstract
Person re-identification (re-ID) aims to match a person-of-interest across non-overlapping cameras despite significant variations in visual appearance. Existing methods mainly train deep neural models on large-scale person re-ID datasets and achieve good performance. However, these methods operate only on visual data, which is easily affected by environmental variations (e.g., viewpoints, poses, and illumination). In this paper, we propose an adaptive multi-task learning (MTL) scheme for cross-domain and cross-modal person re-ID. It effectively exploits visual and language information from multiple datasets to improve learning performance. Comprehensive experiments on the widely used person re-ID datasets Market-1501 and DukeMTMC-reID validate the effectiveness of the proposed method: it models both the domain differences among datasets and the relationship between the vision and language modalities, and achieves state-of-the-art performance. The source code of our proposed method will be available at https://github.com/emdata-ailab/Multitask_Learning_ReID.
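The abstract describes combining losses from several tasks (visual re-ID on different datasets plus a language-modality task) in an adaptive way. As a rough illustration only, a common way to weight multiple task losses adaptively is homoscedastic-uncertainty weighting, where each task has a learnable log-variance; the sketch below shows this generic scheme and is an assumption, not the paper's actual formulation (the function name and inputs are hypothetical):

```python
import math

def adaptive_mtl_loss(task_losses, log_vars):
    """Combine per-task losses with learnable uncertainty weights.

    task_losses: list of scalar losses, e.g. [visual re-ID loss on
                 Market-1501, language-modality loss] (hypothetical split).
    log_vars:    list of learnable log-variances, one per task; a small
                 log-variance up-weights its task, and the additive
                 log_var term regularizes against trivially large variances.
    """
    total = 0.0
    for loss, log_var in zip(task_losses, log_vars):
        precision = math.exp(-log_var)   # 1 / sigma^2
        total += precision * loss + log_var
    return total

# With zero log-variances each task gets unit weight:
print(adaptive_mtl_loss([1.0, 2.0], [0.0, 0.0]))  # → 3.0
```

In practice the `log_vars` would be trainable parameters optimized jointly with the network weights, so the balance between datasets and modalities adapts during training.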
Keywords
Multi-task learning, Deep metric learning, Transfer learning, Cross-modal fusion, Person re-identification