Boosting Unsupervised Domain Adaptation for 3D Object Detection in Point Clouds with 2D Image Semantic Information

2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), 2023

Abstract
Both 3D and RGB-D data are applicable to 3D object detection, yet a significant geometric bias exists between the two representations owing to their different reconstruction procedures. This geometric bias causes performance drops in cross-domain testing; hence we propose an unsupervised domain adaptation (UDA) framework that leverages annotated data in different formats for indoor 3D object detection. Our method inverse-projects the pixel-wise semantic labels predicted from 2D images onto point clouds for object detection and UDA in both directions. For the more challenging UDA from 3D to RGB-D data, we propose additional strategies that reduce the domain gap by aligning the features extracted from the two domains with adversarial training. Our method reduces the domain gap between the two types of data and leverages the semantic labels predicted from 2D RGB images to boost the accuracy of the 3D object detector. In our experiments, we validate our approach with ScanNet and SUN RGB-D as the source and target datasets in both directions of domain adaptation. The proposed method improves mAP@0.25 by 6.4% and 10.3% for the two directions of cross-dataset testing compared with the baseline without domain adaptation.
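The abstract does not give implementation details, but the core inverse-projection step can be illustrated with a minimal sketch: given an RGB-D frame's depth map, camera intrinsics, and a per-pixel semantic label map, each pixel is back-projected to a 3D point that carries its predicted class. The function name and array layout below are hypothetical choices, not from the paper.

```python
import numpy as np

def backproject_semantic_labels(depth, labels, K):
    """Back-project per-pixel semantic labels onto a 3D point cloud.

    depth:  (H, W) depth map in meters
    labels: (H, W) integer semantic class per pixel
    K:      (3, 3) camera intrinsic matrix
    Returns an (N, 4) array of [x, y, z, class] rows for pixels
    with valid (positive) depth, in the camera coordinate frame.
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pixel grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    valid = z > 0
    # Standard pinhole back-projection.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack(
        [x[valid], y[valid], z[valid], labels[valid].astype(float)],
        axis=1,
    )
```

In a pipeline like the one described, the label column of the resulting point cloud would be fed to the 3D detector as an extra per-point feature alongside the geometry.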