Augmented Visual Localization Using a Monocular Camera for Autonomous Mobile Robots

IEEE Conference on Automation Science and Engineering (CASE), 2022

Abstract
A visual localization method utilizing a fisheye monocular camera is proposed to enhance the navigation accuracy of autonomous mobile robots in indoor environments for warehouse or service robotics applications. Existing visual infrastructure-aided localization algorithms either rely on uniquely colored or illuminated robots, which restricts them to ideal lighting conditions and occlusion-free scenarios, or on multi-modal fusion with stereo vision, LiDAR, and inertial sensors, which inevitably increases their complexity. Using fisheye monocular vision imposes challenges such as depth estimation, frame warping, and low state-estimation accuracy for far objects. The proposed augmented localization framework includes an uncertainty-aware state observer, employing a motion model with a learning-based input estimator and point cloud clusters over a region of interest, to estimate the position of a robot while remaining computationally efficient. Observability of the developed state estimator and asymptotic stability of the estimation-error dynamics are also studied. Various tests, including occlusion, low visibility of far objects, and noisy depth estimation (from the clustered region of interest), have been conducted in indoor settings to validate the method. The tests confirm robust performance of the augmented visual localization framework in the presence of intermittent measurements due to environmental conditions or low reliability of vision-based depth estimation. Furthermore, a significant increase in accuracy and consistency of visual localization is shown without using additional stereo, inertial, or LiDAR measurements.
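The abstract's key robustness claim is that the observer keeps estimating the robot's position when vision-based measurements drop out intermittently. A minimal sketch of that idea (not the authors' implementation; the constant-velocity motion model, state layout, and all noise values below are illustrative assumptions) is a Kalman-style observer that runs prediction-only steps whenever the position fix is lost, e.g. due to occlusion or an unreliable depth cluster:

```python
import numpy as np

dt = 0.1  # frame period (assumed)
# State x = [px, py, vx, vy]; constant-velocity motion model (illustrative).
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # position-only measurement
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R = 0.25 * np.eye(2)                       # measurement noise (assumed)

def step(x, P, z=None):
    """One predict(+update) cycle; pass z=None when the measurement is lost."""
    # Prediction with the motion model runs every frame,
    # so the estimate and its covariance stay defined during dropouts.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        # Standard Kalman update when a position fix is available.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
# Simulated intermittent measurements: every third frame is occluded.
for k in range(30):
    z = None if k % 3 == 2 else np.array([0.1 * k, 0.05 * k])
    x, P = step(x, P, z)
```

During a dropout the covariance `P` grows, so the next valid measurement is weighted more heavily, which is the intuition behind the paper's uncertainty-aware handling of intermittent vision data.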
Keywords
monocular camera, localization