PIVO: Probabilistic Inertial-Visual Odometry for Occlusion-Robust Navigation

2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 2018

Citations: 23 | Views: 39
Abstract
This paper presents a novel method for visual-inertial odometry. The method is based on an information fusion framework employing low-cost IMU sensors and the monocular camera in a standard smartphone. We formulate a sequential inference scheme, where the IMU drives the dynamical model and the camera frames are used to couple trailing sequences of augmented poses. The novelty in the model lies in taking into account all the cross-terms in the updates, thus propagating the inter-connected uncertainties throughout the model. Stronger coupling between the inertial and visual data sources leads to robustness against occlusion and feature-poor environments. We demonstrate results on data collected with an iPhone, and provide comparisons against the Tango device and evaluations on the EuRoC data set.
Keywords
Tango device,EuRoC data set,interconnected uncertainty propagation,IMU sensors,augmented pose sequence,sequential inference scheme,standard smartphone,monocular camera,information fusion framework,visual-inertial odometry,occlusion-robust navigation,probabilistic inertial-visual odometry,PIVO,feature-poor environments,visual data sources,inertial data sources
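The abstract's "trailing sequences of augmented poses" with fully propagated cross-terms suggests an EKF-style stochastic-cloning scheme: the current pose is cloned into a sliding window of past poses, and the joint covariance (including cross-covariance blocks between old and new poses) is carried through the IMU-driven prediction. A minimal NumPy sketch of that idea follows; the function names and the generic linear model are illustrative assumptions, not the paper's actual API or equations.

```python
import numpy as np

def augment_pose(mean, cov, pose_dim):
    """Clone the current pose (the first `pose_dim` state entries) into
    the trailing window. The Jacobian J copies the pose block, so the
    new covariance contains the full cross-terms between the clone and
    every existing state entry -- the 'inter-connected uncertainties'.
    (Illustrative sketch, not the paper's exact formulation.)"""
    n = len(mean)
    J = np.vstack([
        np.eye(n),                                            # keep existing state
        np.hstack([np.eye(pose_dim),
                   np.zeros((pose_dim, n - pose_dim))]),      # clone pose block
    ])
    return J @ mean, J @ cov @ J.T

def predict(mean, cov, F, Q):
    """IMU-driven dynamics step on the full augmented state. Building F
    as identity on the cloned blocks keeps past poses fixed while the
    cross-covariance terms are still propagated through F P F^T + Q."""
    return F @ mean, F @ cov @ F.T + Q
```

A visual update would then correct the whole augmented state at once, so a feature observed across several frames constrains all the cloned poses jointly, which is what gives the tight inertial-visual coupling described in the abstract.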