PLPF-VSLAM: An indoor visual SLAM with adaptive fusion of point-line-plane features

Journal of Field Robotics (2024)

Abstract
Simultaneous localization and mapping (SLAM) is required in many applications, and visual SLAM (VSLAM) in particular, owing to its low cost and strong scene recognition capability. Conventional VSLAM relies primarily on scene features such as point features, which makes mapping challenging in scenes with sparse texture. For instance, in environments with limited (low or even no) texture, such as certain indoor scenes, conventional VSLAM may fail for lack of sufficient features. To address this issue, this paper proposes a VSLAM system that adaptively fuses point, line, and plane features (PLPF-VSLAM). As the name implies, it adaptively employs different fusion strategies on the point-line-plane features for tracking and mapping: in richly textured scenes it uses point features alone, while in non-/low-textured scenes it automatically selects a fusion of point, line, and/or plane features. PLPF-VSLAM is evaluated on two RGB-D benchmarks, the TUM and ICL-NUIM datasets. The results demonstrate the superiority of PLPF-VSLAM over other commonly used VSLAM systems: compared with ORB-SLAM2, PLPF-VSLAM improves accuracy by approximately 11.29%, and its processing speed outperforms PL(P)-VSLAM by approximately 21.57%.
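The adaptive selection described in the abstract boils down to a per-frame decision: track with point features alone when the image is richly textured, and fall back to fusing line and/or plane features otherwise. The sketch below is not the authors' implementation; the threshold MIN_POINT_FEATURES and the helpers extract_line_features() and extract_plane_features() are hypothetical placeholders, with only the ORB detector taken from OpenCV.

```python
# Minimal sketch of the adaptive point/line/plane selection idea (not the
# authors' code). The threshold and helper functions are hypothetical.
import cv2

MIN_POINT_FEATURES = 500  # assumed texture-richness threshold (hypothetical)

def extract_line_features(gray_img):
    # Placeholder: a real system would run a line segment detector here.
    return []

def extract_plane_features(depth_img):
    # Placeholder: a real system would segment planes from the RGB-D depth map.
    return []

def select_features(gray_img, depth_img):
    """Return the feature set used for tracking in the current frame."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray_img, None)
    features = {"points": (keypoints, descriptors)}

    # Richly textured scene: point features alone are sufficient.
    if keypoints is not None and len(keypoints) >= MIN_POINT_FEATURES:
        return features

    # Non-/low-textured scene: additionally fuse line and plane features.
    features["lines"] = extract_line_features(gray_img)
    features["planes"] = extract_plane_features(depth_img)
    return features
```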
Keywords
mapping,non-/low-textured scenarios,tracking,visual SLAM