Domain Adaptive Faster R-CNN for Object Detection in the Wild

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)

Abstract
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift at two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at the two levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
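The abstract describes three ingredients: an image-level domain classifier, an instance-level (per-RoI) domain classifier, both trained adversarially against the backbone, and a consistency regularizer tying their predictions together. The following is a minimal sketch of those losses in PyTorch, not the authors' code: all module names, feature sizes, and the hypothetical `img_feat`/`roi_feat` inputs are illustrative assumptions, and the full Faster R-CNN integration is omitted.

```python
# Hedged sketch of the domain adaptation components described in the abstract.
# Sizes and module names are assumptions, not the paper's released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so the domain classifiers are trained adversarially against the backbone."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ImageLevelDomainClassifier(nn.Module):
    """Predicts the domain (source vs. target) at every spatial location of the
    backbone feature map, targeting image-level shift (style, illumination)."""
    def __init__(self, in_channels=512):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 256, kernel_size=1)
        self.conv2 = nn.Conv2d(256, 1, kernel_size=1)

    def forward(self, img_feat):
        x = GradReverse.apply(img_feat)
        x = F.relu(self.conv1(x))
        return self.conv2(x)  # (N, 1, H, W) per-location domain logits


class InstanceLevelDomainClassifier(nn.Module):
    """Predicts the domain from each pooled RoI feature vector, targeting
    instance-level shift (object appearance, size)."""
    def __init__(self, in_features=4096):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_features, 1024), nn.ReLU(),
            nn.Linear(1024, 1),
        )

    def forward(self, roi_feat):
        x = GradReverse.apply(roi_feat)
        return self.fc(x)  # (num_rois, 1) per-instance domain logits


def domain_adaptation_losses(img_logits, inst_logits, domain_label):
    """Image-level and instance-level domain losses, plus a consistency
    regularizer (L2 between the mean image-level probability and each
    instance-level probability), following the abstract's description."""
    target_img = torch.full_like(img_logits, float(domain_label))
    target_inst = torch.full_like(inst_logits, float(domain_label))
    loss_img = F.binary_cross_entropy_with_logits(img_logits, target_img)
    loss_inst = F.binary_cross_entropy_with_logits(inst_logits, target_inst)
    img_prob = torch.sigmoid(img_logits).mean()
    inst_prob = torch.sigmoid(inst_logits)
    loss_consistency = ((img_prob - inst_prob) ** 2).mean()
    return loss_img, loss_inst, loss_consistency
```

In this sketch the gradient reversal layer makes the shared features indistinguishable across domains while the classifiers try to separate them; the consistency term encourages the image-level and instance-level classifiers to agree, which is what pushes the RPN toward domain-invariant proposals.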
Keywords
domain classifier, domain-invariant region proposal network, robust object detection, domain shift scenarios, domain adaptive Faster R-CNN, test data, identical distribution, distribution mismatch, cross-domain robustness, image-level shift, image style, instance-level shift, object appearance, recent state-of-the-art Faster R-CNN model, domain adaptation components, image level, domain discrepancy, adversarial training manner