MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints.

IEEE International Conference on Robotics and Automation (2022)

Abstract
We present a novel self-supervised algorithm named MotionHint for monocular visual odometry (VO) that takes motion constraints into account. A key aspect of our approach is to use an appropriate motion model that can help existing self-supervised monocular VO (SSM-VO) algorithms overcome issues related to local minima within their self-supervised loss functions. The motion model is expressed with a neural network named PPnet, which is trained to coarsely predict the next pose of the camera and the uncertainty of this prediction. Our self-supervised approach combines the original loss and the motion loss, which is the weighted difference between the prediction and the generated ego-motion. Taking two existing SSM-VO systems as our baselines, we evaluate our MotionHint algorithm on the standard KITTI and EuRoC benchmarks. Experimental results show that our MotionHint algorithm can be easily applied to existing open-source state-of-the-art SSM-VO systems to greatly improve performance on the KITTI dataset, reducing the resulting ATE by up to 28.73%. For the EuRoC dataset, our method can extract the motion model, but due to the poor performance of the baseline methods, MotionHint cannot significantly improve their results.
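As a rough illustration of the combined objective described in the abstract, the sketch below shows how a motion loss could weight the difference between PPnet's predicted pose and the ego-motion produced by the SSM-VO network, using the predicted uncertainty as an inverse weight. The function names, the 6-DoF pose vector parameterization, and the balancing factor `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def motion_loss(pred_pose, pred_uncertainty, ego_motion_pose):
    """Weighted difference between the motion model's predicted pose and the
    ego-motion generated by the SSM-VO network (a sketch; the paper's exact
    weighting and pose parameterization may differ)."""
    # Residual between predicted and generated pose (e.g. a 6-DoF vector).
    residual = pred_pose - ego_motion_pose
    # Weight each component by the inverse of the predicted uncertainty,
    # so confident predictions constrain the VO network more strongly.
    weights = 1.0 / (pred_uncertainty + 1e-6)
    return (weights * residual.pow(2)).mean()

def total_loss(original_loss, pred_pose, pred_uncertainty, ego_motion_pose, lam=0.1):
    # Combined objective: the baseline's original self-supervised loss
    # plus the motion constraint term (lam is a hypothetical weight).
    return original_loss + lam * motion_loss(pred_pose, pred_uncertainty, ego_motion_pose)
```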
Keywords
motionhint, constraints, self-supervised