Comparative Study of Visual SLAM-Based Mobile Robot Localization Using Fiducial Markers

Computing Research Repository (CoRR), 2023

Abstract
This paper presents a comparative study of three modes for mobile robot localization based on visual SLAM using fiducial markers (i.e., square-shaped artificial landmarks with a black-and-white grid pattern): SLAM, SLAM with a prior map, and localization with a prior map. We compare SLAM-based approaches that leverage fiducial markers because previous work has shown their superior performance over feature-only methods, with less computational burden than methods that combine feature and marker detection, and without compromising localization performance. The evaluation is conducted using indoor image sequences captured with a hand-held camera in an environment containing multiple fiducial markers. The performance metrics include absolute trajectory error and runtime of the optimization process per frame. In particular, for the last two modes (SLAM with a prior map and localization with a prior map), we evaluate their performance by perturbing the quality of the prior map to study the extent to which each mode tolerates such perturbations. Hardware experiments show consistent trajectory error levels across the three modes, with the localization mode exhibiting the shortest runtime among them. Yet, with map perturbations, SLAM with a prior map maintains its performance, while the localization mode degrades in both aspects.
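The absolute trajectory error (ATE) metric used in the evaluation above can be sketched as follows. This is a generic illustration, not the paper's evaluation code: it rigidly aligns the estimated trajectory to ground truth (Umeyama-style alignment without scale) and reports the translational RMSE.

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """Translational RMSE after rigid (rotation + translation) alignment.

    gt, est: (N, 3) arrays of corresponding ground-truth and estimated
    camera positions. Returns the ATE RMSE in the units of the input.
    """
    gt = np.asarray(gt, dtype=float)
    est = np.asarray(est, dtype=float)
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    # Cross-covariance between the centered point sets
    H = (est - mu_est).T @ (gt - mu_gt)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T          # rotation aligning est onto gt
    t = mu_gt - R @ mu_est      # translation aligning est onto gt
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))
```

Because the alignment removes any global rigid offset, a trajectory that differs from ground truth only by a fixed rotation and translation yields an ATE of zero; only shape discrepancies between the two trajectories contribute to the error.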
Key words
mobile robot localization, markers, SLAM-based
Key points: This paper presents a comparative study of three modes of visual SLAM-based mobile robot localization using fiducial markers, examining their performance under different conditions and their tolerance to changes in prior-map quality.

Methods: Three approaches are compared (SLAM alone, SLAM combined with a prior map, and localization using only a prior map) to evaluate the effectiveness of fiducial markers for mobile robot localization.

Experiments: The evaluation uses indoor image sequences captured with a hand-held camera in an environment containing multiple fiducial markers, measuring absolute trajectory error and per-frame optimization runtime, and perturbing the quality of the prior map to test each mode's tolerance. Results show consistent trajectory error levels across the three modes, with the localization-only mode achieving the shortest runtime; under map-quality perturbations, however, that mode degrades while SLAM with a prior map maintains its performance.