Design of visual inertial state estimator for autonomous systems via multi-sensor fusion approach

Mechatronics (2023)

Abstract
The achievement of autonomous navigation in autonomous systems critically hinges on robust localization and reliable mapping. This paper proposes a new visual-inertial simultaneous localization and mapping (SLAM) algorithm consisting of a visual-inertial frontend, a system backend, a loop closure detection module, and an initialization module. First, by combining the inverse compositional optical flow method with an image pyramid, the algorithm addresses localization failures of autonomous systems caused by the light sensitivity of vision sensors. To meet real-time requirements, the computational complexity of the algorithm is effectively reduced by combining FAST corner detection with the Threading Building Blocks (TBB) programming library. Second, an inertial measurement unit (IMU) pre-integration model based on the fourth-order Runge-Kutta (RK4) method effectively improves the estimation accuracy of autonomous systems. A nonlinear optimization backend based on the DogLeg method, a sliding window, and marginalization is adopted to reduce the computational complexity of backend processing. Third, to mitigate the accumulation of errors that leads to large pose errors over long periods, a loop closure detection module is introduced, and an initialization module is added to integrate visual and inertial data. Finally, the feasibility and robustness of the system are verified through testing on the EuRoC dataset with the evo precision evaluation tool.
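The abstract does not give implementation details of the RK4-based IMU propagation, so the following is only a minimal Python sketch of fourth-order Runge-Kutta integration of IMU kinematics, not the authors' code. All names (rk4_imu_step, state_derivative), the Hamilton quaternion convention, the gravity vector, the assumption of bias-corrected measurements, and the midpoint interpolation of IMU samples are illustrative assumptions.

import numpy as np

def quat_mul(q, p):
    # Hamilton product of quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_rot(q):
    # Rotation matrix of a unit quaternion [w, x, y, z].
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed world-frame gravity

def state_derivative(q, v, acc_body, gyro_body):
    # Continuous-time kinematics: q_dot = 0.5 * q ⊗ [0, w], v_dot = R(q) a + g, p_dot = v.
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], gyro_body)))
    v_dot = quat_to_rot(q) @ acc_body + GRAVITY
    p_dot = v
    return q_dot, v_dot, p_dot

def rk4_imu_step(q, v, p, acc0, gyro0, acc1, gyro1, dt):
    """One RK4 propagation step between two consecutive (bias-corrected) IMU samples.
    Measurements at the interval midpoint are linearly interpolated."""
    acc_m  = 0.5 * (acc0 + acc1)
    gyro_m = 0.5 * (gyro0 + gyro1)

    k1q, k1v, k1p = state_derivative(q,              v,              acc0,  gyro0)
    k2q, k2v, k2p = state_derivative(q + 0.5*dt*k1q, v + 0.5*dt*k1v, acc_m, gyro_m)
    k3q, k3v, k3p = state_derivative(q + 0.5*dt*k2q, v + 0.5*dt*k2v, acc_m, gyro_m)
    k4q, k4v, k4p = state_derivative(q + dt*k3q,     v + dt*k3v,     acc1,  gyro1)

    q_next = q + dt/6.0 * (k1q + 2*k2q + 2*k3q + k4q)
    v_next = v + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    p_next = p + dt/6.0 * (k1p + 2*k2p + 2*k3p + k4p)
    # Renormalize: additive quaternion updates drift slightly off the unit sphere.
    return q_next / np.linalg.norm(q_next), v_next, p_next

Compared with the Euler or midpoint rules often used for IMU integration, RK4 has a local truncation error of order dt^5, which is the usual motivation for using it to improve propagation accuracy between visual keyframes.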
Keywords
visual inertial state estimator, autonomous systems, fusion, multi-sensor