Refining Video-Based Person Re-Identification: An Integrated Framework with Facial and Body Cues.

In Person Re-Identification (Re-ID), the use of facial cues has often been overlooked due to the focus on low-quality image datasets in past research. However, these cues are essential biometric markers, particularly valuable in video person re-identification scenarios where abundant facial information is available. This paper introduces the Dual-Cue Graph Network (DCGN), a graph convolutional network-based method for re-ranking that integrates facial and body cues. Our approach begins with a facial feature fusion module that prioritizes face quality to improve the extraction of facial features from videos. Unlike traditional Re-ID networks, our method focuses on facial cues for person retrieval, producing preliminary candidate results. We then implement a confidence-weighted fusion module to combine body and facial cues and re-rank these initial results, thereby enhancing the overall person retrieval process. Our experiments on real-world video datasets confirm the effectiveness of this method, demonstrating that facial cues are a critical source of information in video-based scenarios and significantly boost the performance of video person re-identification.
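The two modules described above can be illustrated with a minimal sketch. All names here (`fuse_face_features`, `rerank`, the confidence weighting scheme) are assumptions for illustration, not the paper's actual DCGN implementation: per-frame face features are averaged with quality-based weights, and the final ranking fuses body and face similarity scores weighted by a face-confidence term.

```python
import numpy as np

def fuse_face_features(frame_feats, quality_scores):
    """Quality-weighted fusion of per-frame face features (illustrative sketch).

    frame_feats: (T, D) array of face features from T video frames.
    quality_scores: length-T array of face-quality estimates (higher = better).
    """
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()                      # normalize qualities into weights
    feats = np.asarray(frame_feats, dtype=float)
    fused = (w[:, None] * feats).sum(axis=0)
    return fused / np.linalg.norm(fused)  # L2-normalize the fused feature

def rerank(body_sim, face_sim, face_conf):
    """Confidence-weighted fusion of body and face similarities, then re-rank.

    body_sim, face_sim: per-gallery-item similarity scores for the query.
    face_conf: scalar in [0, 1]; how much to trust the facial cue.
    Returns gallery indices sorted best-match first.
    """
    body_sim = np.asarray(body_sim, dtype=float)
    face_sim = np.asarray(face_sim, dtype=float)
    score = (1.0 - face_conf) * body_sim + face_conf * face_sim
    return np.argsort(-score)
```

For example, with `body_sim = [0.9, 0.2]`, `face_sim = [0.1, 0.8]`, and `face_conf = 0.7`, the fused scores are `[0.34, 0.62]`, so the facial cue flips the ranking in favor of gallery item 1. The paper's actual fusion operates on a graph convolutional network over the candidate set, which this scalar sketch does not capture.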