Monocular Depth Prediction through Continuous 3D Loss

IROS 2020

Cited by: 6
Abstract
This paper reports a new continuous 3D loss function for learning depth from monocular images. The dense depth prediction from a monocular image is supervised using sparse LIDAR points, which enables us to leverage available open-source datasets with camera-LIDAR sensor suites during training. Currently, accurate and affordable range sensors are not readily available: stereo cameras measure depth inaccurately, while LIDARs measure it sparsely and at high cost. In contrast to the current point-to-point loss evaluation approach, the proposed 3D loss treats point clouds as continuous objects; it therefore compensates for the lack of dense ground-truth depth caused by the sparsity of LIDAR measurements. We applied the proposed loss to three state-of-the-art monocular depth prediction approaches: DORN, BTS, and Monodepth2. Experimental evaluation shows that the proposed loss improves depth prediction accuracy and produces point clouds with more consistent 3D geometric structures than all tested baselines, implying the benefit of the proposed loss for general depth prediction networks. A video demo of this work is available at https://youtu.be/5HL8BjSAY4Y.
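The abstract does not spell out the loss formulation, but the idea of treating two point clouds as continuous objects rather than matching points one-to-one can be illustrated with a kernel embedding. Below is a minimal PyTorch sketch of such a loss using an RBF kernel and an MMD-style distance; the function name, the `sigma` parameter, and the exact kernel choice are illustrative assumptions, not the paper's published formulation.

```python
# Minimal sketch of a kernel-based "continuous" 3D loss between a dense
# predicted point cloud and sparse LIDAR points. Assumptions: an RBF
# kernel and an MMD-style RKHS distance; the paper's actual kernels and
# weighting may differ.
import torch


def continuous_3d_loss(pred_pts, lidar_pts, sigma=0.5):
    """RKHS distance between two point clouds viewed as continuous functions.

    pred_pts:  (N, 3) points back-projected from the predicted depth map
    lidar_pts: (M, 3) sparse LIDAR points in the same (camera) frame
    """
    def kernel_mean(a, b):
        # Average RBF kernel value k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
        # over all point pairs between clouds a and b.
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2)).mean()

    # ||f - g||^2 = <f, f> + <g, g> - 2 <f, g> in the RKHS. Minimizing this
    # pulls the predicted surface toward the LIDAR points without requiring
    # one-to-one point correspondences, so sparse supervision still
    # constrains the dense prediction.
    return (kernel_mean(pred_pts, pred_pts)
            + kernel_mean(lidar_pts, lidar_pts)
            - 2 * kernel_mean(pred_pts, lidar_pts))


# Usage: the loss is differentiable, so it can supervise a depth network.
pred = torch.rand(1000, 3, requires_grad=True)   # dense prediction
lidar = torch.rand(64, 3)                        # sparse LIDAR supervision
loss = continuous_3d_loss(pred, lidar)
loss.backward()
```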
Keywords
monocular image, dense depth prediction, sparse LIDAR points, leverage available open-source datasets, camera-LIDAR sensor suites, accurate range sensor, affordable range sensor, stereo cameras, LIDARs measure depth, current point-to-point loss evaluation approach, point clouds, continuous objects, dense ground-truth depth, LIDAR's sparse measurements, state-of-the-art monocular depth prediction, depth prediction accuracy, point clouds, consistent 3D geometric structures, general depth prediction networks