Predictive monocular odometry (PMO): What is possible without RANSAC and multiframe bundle adjustment?

Image and Vision Computing (2017)

Abstract
Visual odometry using only a monocular camera faces more algorithmic challenges than stereo odometry. We present a robust monocular visual odometry framework for automotive applications. An extended propagation-based tracking framework is proposed that yields highly accurate (unscaled) pose estimates. Scale is supplied by ground plane pose estimation, which uses street-pixel labeling from a convolutional neural network (CNN). The proposed framework has been extensively tested on the KITTI dataset and achieves a higher rank than currently published state-of-the-art monocular methods in the KITTI odometry benchmark. Unlike other VO/SLAM methods, this result is achieved without a loop-closing mechanism, without RANSAC, and without multiframe bundle adjustment. Thus, we challenge the common belief that robust systems can only be built using iterative robustification tools like RANSAC.
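The abstract states that absolute scale comes from ground plane pose estimation. The snippet below is a minimal NumPy sketch of the standard idea behind such scale recovery, not the paper's actual pipeline: fit a plane to triangulated points labeled as street, then use the known metric mounting height of the camera above the road to fix the scale of the otherwise up-to-scale reconstruction. The camera-height constant, function names, and the plain least-squares plane fit are illustrative assumptions.

```python
import numpy as np

# Assumed fixed metric camera mounting height above the road (KITTI-like setup);
# the actual value and calibration procedure are not taken from the paper.
KNOWN_CAMERA_HEIGHT_M = 1.65


def fit_ground_plane(points):
    """Least-squares plane fit n^T x + d = 0 (with ||n|| = 1) to 3-D points
    that were triangulated up to an unknown scale and labeled as street."""
    centroid = points.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered point cloud is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return normal, d


def recover_scale(street_points_unscaled, camera_height_m=KNOWN_CAMERA_HEIGHT_M):
    """Absolute scale = known metric camera height divided by the unscaled
    distance from the camera center (the origin) to the fitted ground plane."""
    _, d = fit_ground_plane(street_points_unscaled)
    unscaled_height = abs(d)  # point-to-plane distance of the origin, since ||n|| = 1
    return camera_height_m / unscaled_height


# Usage sketch: rescale the relative translation of an up-to-scale pose estimate.
# t_metric = recover_scale(road_points_unscaled) * t_unscaled
```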
Keywords
Visual odometry,Monocular visual odometry,SLAM,Pose prediction,Joint epipolar tracking,Ground plane estimation,Driver assistance