A Deep Learning-Based Visual Map Generation for Mobile Robot Navigation

Carlos A. Garcia-Pintos, Noe G. Aldana-Murillo, Emmanuel Ovalle-Magallanes, Edgar Martinez

Eng (2023)

Abstract
Visual map-based robot navigation is a strategy that relies only on the robot's vision system and involves four fundamental stages: learning or mapping, localization, planning, and navigation. It is therefore paramount to model the environment optimally so that these stages can be performed. In this paper, we propose a novel framework to generate a visual map for both indoor and outdoor environments. The visual map comprises key images, where consecutive key images share visual information. The learning stage employs a pre-trained local feature transformer (LoFTR) constrained by a 3D projective transformation (a fundamental matrix) between two consecutive key images. Outliers are efficiently detected using marginalizing sample consensus (MAGSAC) while estimating the fundamental matrix. We conducted extensive experiments to validate our approach on six different datasets and compared its performance against hand-crafted methods.
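The abstract does not give implementation details, but the described pipeline (LoFTR matching between consecutive frames, MAGSAC-based fundamental matrix estimation to reject outliers, and key-image selection) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: it assumes the kornia LoFTR implementation and OpenCV's USAC_MAGSAC estimator, and the key-image criterion (`min_inliers`) is a hypothetical parameter, since the paper's actual selection rule is not stated in the abstract.

```python
# Hedged sketch of the key-image selection idea described in the abstract:
# match consecutive grayscale frames with a pre-trained LoFTR model and
# estimate the fundamental matrix with MAGSAC to discard outlier matches.
import cv2
import numpy as np
import torch
import kornia.feature as KF


def to_tensor(gray: np.ndarray) -> torch.Tensor:
    """Convert an HxW uint8 grayscale image to a 1x1xHxW float tensor in [0, 1]."""
    return torch.from_numpy(gray)[None, None].float() / 255.0


def match_and_filter(img0: np.ndarray, img1: np.ndarray, matcher: KF.LoFTR):
    """Match two frames with LoFTR, then keep only MAGSAC inlier correspondences."""
    with torch.no_grad():
        out = matcher({"image0": to_tensor(img0), "image1": to_tensor(img1)})
    kpts0 = out["keypoints0"].cpu().numpy()
    kpts1 = out["keypoints1"].cpu().numpy()
    if len(kpts0) < 8:  # at least 8 correspondences needed for the fundamental matrix
        return None, np.zeros(len(kpts0), dtype=bool)
    # Estimate F with the MAGSAC variant of USAC; the mask flags inliers.
    F, mask = cv2.findFundamentalMat(kpts0, kpts1, cv2.USAC_MAGSAC, 1.0, 0.999, 10000)
    inliers = mask.ravel().astype(bool) if mask is not None else np.zeros(len(kpts0), dtype=bool)
    return F, inliers


def build_visual_map(frames, min_inliers=100):
    """Greedily pick key images: open a new key image when the MAGSAC inlier
    count against the last key image falls below `min_inliers` (assumed rule)."""
    matcher = KF.LoFTR(pretrained="outdoor").eval()
    key_indices = [0]
    for i in range(1, len(frames)):
        _, inliers = match_and_filter(frames[key_indices[-1]], frames[i], matcher)
        if inliers.sum() < min_inliers:
            key_indices.append(i)
    return key_indices
```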
Keywords
deep learning,local feature matching,mobile robot,monocular vision,visual map