mVIL-Fusion: Monocular Visual-Inertial-LiDAR Simultaneous Localization and Mapping in Challenging Environments

IEEE Robotics and Automation Letters (2023)

Abstract
We propose mVIL-Fusion, a three-level multisensor fusion system that achieves robust state estimation and globally consistent mapping in perceptually degraded environments. First, a LiDAR-depth-assisted visual-inertial odometry (VIO) with LiDAR-odometry (LO)-synchronized prediction and distortion correction is proposed as the frontend of our system. Second, a novel double-sliding-window-based midend jointly optimizes LiDAR scan-to-scan translation constraints (for VIO status detection) and scan-to-map rotation constraints (for local mapping) to enhance the accuracy and robustness of the state estimation. In the backend, loop closures between local-map-based keyframes are identified with altitude verification, and the global map is generated by incremental smoothing of a pose-only factor graph with an altitude prior. The performance of our system is verified on both a public dataset and several self-collected sequences in challenging environments. To benefit the robotics community, our implementation is available at https://github.com/Stan994265/mVIL-Fusion.
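The backend described above can be pictured as a pose-only factor graph that is re-smoothed incrementally whenever a new keyframe, verified loop closure, or altitude measurement arrives. The sketch below is not taken from the mVIL-Fusion implementation; it is a minimal illustration using GTSAM's iSAM2, and the keyframe symbols, noise sigmas, and the loose-prior trick used to emulate an altitude-only constraint are all assumptions made for this example.

```python
# Minimal sketch (assumptions, not the authors' code): a pose-only factor
# graph with odometry, loop-closure, and altitude constraints, smoothed
# incrementally with GTSAM's iSAM2.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X  # X(i) = key of keyframe pose i

isam = gtsam.ISAM2()

# Noise sigmas are illustrative: [rot_x, rot_y, rot_z, t_x, t_y, t_z].
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.02, 0.02, 0.02, 0.10, 0.10, 0.10]))
first_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3] * 6))


def add_keyframe(i, guess, rel_pose=None, loop=None, altitude=None):
    """Add keyframe i and run one incremental smoothing step.

    guess    -- gtsam.Pose3 initial estimate (e.g. from the midend)
    rel_pose -- gtsam.Pose3 odometry constraint from keyframe i-1 to i
    loop     -- optional (j, T_ji): verified loop closure against keyframe j
    altitude -- optional z measurement used as an altitude prior
    """
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    values.insert(X(i), guess)

    if i == 0:
        graph.add(gtsam.PriorFactorPose3(X(0), guess, first_noise))
    else:
        graph.add(gtsam.BetweenFactorPose3(X(i - 1), X(i), rel_pose, odom_noise))

    if loop is not None:
        j, T_ji = loop
        graph.add(gtsam.BetweenFactorPose3(X(j), X(i), T_ji, odom_noise))

    if altitude is not None:
        # Altitude prior emulated with a full-pose prior that is loose in every
        # dimension except the z translation (a workaround for this sketch).
        alt_noise = gtsam.noiseModel.Diagonal.Sigmas(
            np.array([1e6, 1e6, 1e6, 1e6, 1e6, 0.1]))
        alt_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.0, 0.0, altitude))
        graph.add(gtsam.PriorFactorPose3(X(i), alt_pose, alt_noise))

    isam.update(graph, values)       # incremental smoothing (iSAM2)
    return isam.calculateEstimate()  # current globally consistent trajectory
```

Because only the keyframe poses are variables, each update touches a small portion of the graph, which is what makes incremental smoothing attractive for maintaining a globally consistent map online.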
Keywords
Sensor fusion, SLAM