High Precision and Robust Vehicle Localization Algorithm with Visual-LiDAR-IMU Fusion

IEEE Transactions on Vehicular Technology (2024)

Abstract
Simultaneous localization and mapping (SLAM) has become indispensable for autonomous driving vehicles. Since visual images are vulnerable to light interference and light detection and ranging (LiDAR) depends heavily on the geometric features of the surrounding scene, relying on a camera or LiDAR alone shows limitations in challenging environments. This paper proposes a visual-LiDAR-IMU fusion method for high-precision and robust vehicle localization. In the front end, the LiDAR point cloud is used to obtain the depth of visual features, and the synchronized IMU measurements are fed into the pose estimation module in a loosely coupled manner. In the back end, two strategies are proposed to reduce the computational load of the algorithm: a balanced selection strategy based on keyframe and sliding-window algorithms, and a classification optimization strategy based on feature points and pose estimation assistance. In addition, an improved loop detection algorithm based on the Iterative Closest Point (ICP) method is proposed to reduce large-scale drift. Experimental results on real-world scenes show that the average positioning error of the proposed algorithm is 1.10 m, 0.91 m, and 1.04 m in the x, y, and z directions, the average rotation error is 1.03°, 0.81°, and 0.70° for roll, pitch, and yaw, the average resource utilization is 32.04% (CPU) and 13.18% (memory), and the average processing time is 24.87 ms. Compared with the ORB-SLAM3, LVIO, LVI-SAM, R3LIVE, and Fast-LIVO algorithms, the proposed algorithm achieves better accuracy and robustness with the best real-time performance.
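
A minimal sketch (not the authors' implementation) of the front-end idea the abstract describes: projecting the LiDAR point cloud into the camera image so that tracked visual features can be assigned metric depth. The intrinsic matrix K, the extrinsic transform T_cam_lidar, and the pixel search radius are illustrative assumptions, not values from the paper.

# Sketch: assign LiDAR depth to tracked visual features (assumed calibration)
import numpy as np

def assign_depth_to_features(features_px, lidar_points, K, T_cam_lidar, radius_px=3.0):
    """Return one depth (or NaN) per feature pixel.

    features_px  : (N, 2) tracked feature pixel coordinates (u, v).
    lidar_points : (M, 3) LiDAR points in the LiDAR frame.
    K            : (3, 3) camera intrinsic matrix (assumed known).
    T_cam_lidar  : (4, 4) homogeneous LiDAR-to-camera extrinsic (assumed known).
    radius_px    : max pixel distance between a feature and a projected point.
    """
    # Transform LiDAR points into the camera frame.
    pts_h = np.hstack([lidar_points, np.ones((lidar_points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    depths = np.full(features_px.shape[0], np.nan)
    if pts_cam.shape[0] == 0:
        return depths

    # Project the remaining points onto the image plane.
    uv_h = (K @ pts_cam.T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]

    for i, (u, v) in enumerate(features_px):
        # The nearest projected LiDAR point within the search radius supplies the depth.
        d2 = (uv[:, 0] - u) ** 2 + (uv[:, 1] - v) ** 2
        j = int(np.argmin(d2))
        if d2[j] <= radius_px ** 2:
            depths[i] = pts_cam[j, 2]
    return depths

In a full pipeline this depth association would feed the loosely coupled pose estimation module together with the IMU measurements; the nearest-neighbor lookup here stands in for whatever interpolation or plane-fitting scheme the paper actually uses.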
Keywords
Intelligent transportation system (ITS), autonomous driving vehicles, localization, multi-sensor fusion, SLAM, visual