Lidar-Visual-Inertial Odometry Using Point and Line Features

2022 4th International Conference on Robotics and Computer Vision (ICRCV), 2022

Abstract
We present a robust, real-time lidar-visual-inertial odometry method that introduces visual line features. Compared with traditional multi-sensor fusion schemes based only on point features, our approach uses line features for pose estimation, exploits more of the environment's features, and provides additional constraints on the scene structure. We first use an improved LSD algorithm to extract line features in real time, then recover the depth of the point and line features from the lidar. The recovered depth serves as a prior constraint in point-line bundle adjustment, and the resulting pose estimate is in turn used as a prior for the laser odometry that computes the final pose, yielding a lidar-visual-inertial odometry pipeline based on point-line features. This improves pose-estimation accuracy. Our evaluation on the M2DGR dataset shows that the method achieves higher accuracy than most open-source frameworks.
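The pipeline described above jointly uses point reprojection, line reprojection, and lidar depth information. Below is a minimal sketch of what such a point-line bundle adjustment objective with a lidar depth prior can look like, written in our own notation purely for illustration (the symbols T^j_cw, X_i, P^s_k, P^e_k, ℓ_kj, z_i and the covariances are assumptions, not the paper's notation):

\min_{\mathcal{X}} \;
\sum_{(i,j)} \rho\!\left( \big\| \pi\!\left(T^{j}_{cw}\,\mathbf{X}_i\right) - \mathbf{u}_{ij} \big\|^2_{\Sigma_p} \right)
\;+\;
\sum_{(k,j)} \rho\!\left( d\!\left(\pi\!\left(T^{j}_{cw}\,\mathbf{P}^{s}_k\right), \boldsymbol{\ell}_{kj}\right)^{2}
 + d\!\left(\pi\!\left(T^{j}_{cw}\,\mathbf{P}^{e}_k\right), \boldsymbol{\ell}_{kj}\right)^{2} \right)
\;+\;
\sum_{i} \big\| z_i - \hat{z}^{\,\mathrm{lidar}}_i \big\|^2_{\Sigma_d}

Here \pi(\cdot) is the camera projection, T^j_cw the pose of frame j, X_i a 3D point observed at pixel u_ij, P^s_k and P^e_k the endpoints of 3D line k, d(u, ℓ) the image-plane distance from a projected endpoint to the detected 2D line ℓ_kj, z_i the estimated feature depth, ẑ_i^lidar the depth extracted from the lidar point cloud, and ρ a robust loss. The depth-prior term is one natural way to encode the lidar depth as an a priori constraint on bundle adjustment, as the abstract suggests; the paper may weight or parameterize it differently.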
Keywords
Lidar-Visual-Inertial odometry, point-line features, depth information, virtual stereo camera model