Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)

Abstract
Despite learning-based methods showing promising results in single-view depth estimation and visual odometry, most existing approaches treat the tasks in a supervised manner. Recent approaches to single-view depth estimation explore the possibility of learning without full supervision by minimizing photometric error. In this paper, we explore the use of stereo sequences for learning depth and visual odometry. The use of stereo sequences enables the use of both spatial (between left-right pairs) and temporal (forward-backward) photometric warp error, and constrains the scene depth and camera motion to be in a common, real-world scale. At test time, our framework is able to estimate single-view depth and two-view odometry from a monocular sequence. We also show how we can improve on a standard photometric warp loss by considering a warp of deep features. We show through extensive experiments that: (i) jointly training for single-view depth and visual odometry improves depth prediction because of the additional constraint imposed on depths, and achieves competitive results for visual odometry; (ii) the deep feature-based warping loss improves upon the simple photometric warp loss for both single-view depth estimation and visual odometry. Our method outperforms existing learning-based methods on the KITTI driving dataset in both tasks. The source code is available at https://github.com/Huangying-Zhan/Depth-VO-Feat.
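The core of the approach described above is view synthesis: the target frame is reconstructed by warping a source frame (the other stereo image, or a temporal neighbour) through the predicted depth and relative camera pose, and the photometric, and optionally deep-feature, difference between the real and synthesized views drives training. The sketch below is a minimal PyTorch illustration of such losses under stated assumptions; the names (`warp_to_target`, `feat_net`) and tensor shapes are hypothetical and not taken from the authors' code, which is available at the repository linked above.

```python
# Minimal PyTorch sketch (not the authors' implementation) of the view-synthesis
# losses described in the abstract. Names such as `warp_to_target` and `feat_net`
# are illustrative assumptions; the real code is in the linked repository.
import torch
import torch.nn.functional as F


def backproject(depth, K_inv):
    """Lift every pixel to a 3-D point: X = D(p) * K^{-1} * [u, v, 1]^T."""
    b, _, h, w = depth.shape
    v, u = torch.meshgrid(
        torch.arange(h, device=depth.device),
        torch.arange(w, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).float().view(1, 3, -1)
    rays = K_inv @ pix                      # (b, 3, H*W) camera rays
    return depth.view(b, 1, -1) * rays      # scale rays by the predicted depth


def warp_to_target(src, depth_tgt, T_tgt_to_src, K, K_inv):
    """Synthesize the target view by sampling `src` (the other stereo image or a
    temporal neighbour) at pixel locations given by depth and relative pose."""
    b, _, h, w = src.shape
    pts = backproject(depth_tgt, K_inv)                                # target frame
    pts_src = T_tgt_to_src[:, :3, :3] @ pts + T_tgt_to_src[:, :3, 3:]  # source frame
    proj = K @ pts_src
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)                     # perspective divide
    grid_u = 2.0 * uv[:, 0] / (w - 1) - 1.0                            # normalize to [-1, 1]
    grid_v = 2.0 * uv[:, 1] / (h - 1) - 1.0
    grid = torch.stack([grid_u, grid_v], dim=-1).view(b, h, w, 2)
    return F.grid_sample(src, grid, align_corners=True)


def reconstruction_loss(tgt, src, depth_tgt, T_tgt_to_src, K, K_inv, feat_net=None):
    """L1 photometric warp loss, optionally adding a deep-feature reconstruction
    term (here `feat_net` is assumed to preserve the input resolution)."""
    src_warped = warp_to_target(src, depth_tgt, T_tgt_to_src, K, K_inv)
    loss = (tgt - src_warped).abs().mean()                 # photometric warp loss
    if feat_net is not None:
        f_src_warped = warp_to_target(feat_net(src), depth_tgt, T_tgt_to_src, K, K_inv)
        loss = loss + (feat_net(tgt) - f_src_warped).abs().mean()
    return loss
```

In a stereo-sequence setting, such a loss would be summed over both spatial (left-right) and temporal (forward-backward) source frames; the known stereo baseline is what ties the predicted depth and camera motion to a common, real-world scale, as stated in the abstract.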
Keywords
KITTI driving dataset,view depth estimation,camera motion,photometric warp loss,learning based methods,monocular depth estimation,unsupervised learning,deep feature-based warping loss,depth prediction,standard photometric warp loss,two-view odometry,scene depth,stereo sequences,visual odometry