Robust Ensemble Person Re-Identification via Orthogonal Fusion with Occlusion Handling
CoRR (2024)
Abstract
Occlusion remains one of the major challenges in person re-identification
(Re-ID) owing to the diversity of poses and the variation of appearances.
Developing novel architectures to improve the robustness of occlusion-aware
person Re-ID requires new insights, especially on low-resolution edge cameras.
We propose a deep ensemble model that harnesses both CNN and Transformer
architectures to generate robust feature representations. To achieve robust
Re-ID without manually labeling occluded regions, we take an ensemble
learning-based approach motivated by the correspondence between arbitrarily
shaped occluded regions and robust feature representations. Using
the orthogonality principle, our developed deep CNN model makes use of masked
autoencoder (MAE) and global-local feature fusion for robust person
identification. Furthermore, we present a part occlusion-aware transformer
capable of learning feature space that is robust to occluded regions.
Experimental results are reported on several Re-ID datasets to show the
effectiveness of our developed ensemble model named orthogonal fusion with
occlusion handling (OFOH). Compared to competing methods, the proposed OFOH
approach achieves competitive rank-1 and mAP performance.
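The orthogonality principle mentioned above can be illustrated with a minimal sketch: one branch's feature is decomposed into its projection onto, and its residual orthogonal to, the other branch's feature, so the fused representation avoids redundant information. The function name and the concatenation step are illustrative assumptions, not the paper's exact OFOH formulation.

```python
import numpy as np

def orthogonal_fusion(f_global, f_local):
    """Fuse f_local with the component of f_global orthogonal to it.

    A generic orthogonality-based fusion sketch; the paper's exact
    global-local fusion may differ in detail.
    """
    f_global = np.asarray(f_global, dtype=float)
    f_local = np.asarray(f_local, dtype=float)
    # Projection of the global feature onto the local feature direction
    proj = (f_global @ f_local) / (f_local @ f_local + 1e-12) * f_local
    # Orthogonal residual: information not already carried by f_local
    orth = f_global - proj
    # Concatenate the local feature with the orthogonal component
    return np.concatenate([f_local, orth])

fused = orthogonal_fusion([1.0, 2.0, 3.0], [1.0, 0.0, 0.0])
```

By construction, the residual half of the fused vector has zero dot product with the local feature, so the two halves contribute complementary rather than overlapping information.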