Bridging The Appearance Gap: Multi-Experience Localization For Long-Term Visual Teach And Repeat

2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016

Abstract
Vision-based, route-following algorithms enable autonomous robots to repeat manually taught paths over long distances using inexpensive vision sensors. However, these methods struggle with long-term, outdoor operation due to the challenges of environmental appearance change caused by lighting, weather, and seasons. While techniques exist to address appearance change by using multiple experiences over different environmental conditions, they either provide topological-only localization, require several manually taught experiences in different conditions, or require extensive offline mapping to produce metric localization. For real-world use, we would like to localize metrically to a single manually taught route and gather additional visual experiences during autonomous operations. Accordingly, we propose a novel multi-experience localization (MEL) algorithm developed specifically for route-following applications; it provides continuous, six-degree-of-freedom (6DOF) localization with relative uncertainty to a privileged (manually taught) path using several experiences simultaneously. We validate our algorithm through two experiments: i) an offline performance analysis on a 9km subset of a challenging 27km route-traversal dataset and ii) an online field trial where we demonstrate autonomy on a small 250m loop over the course of a sunny day. Both exhibit significant appearance change due to lighting variation. Through these experiments we show that safe localization can be achieved by bridging the appearance gap.
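The abstract describes localizing against several stored experiences simultaneously to produce a single 6DOF estimate with relative uncertainty. As a loose illustration only (not the authors' implementation, which estimates the pose jointly from feature matches across experiences), one can picture combining per-experience pose estimates with inverse-covariance weighting; all names and the 6-vector perturbation parameterization below are illustrative assumptions:

```python
import numpy as np

def fuse_experience_estimates(estimates):
    """Illustrative sketch: fuse per-experience 6DOF pose estimates
    relative to the privileged path by inverse-covariance weighting.

    estimates: list of (mean, cov) pairs, where mean is a 6-vector
    pose perturbation (translation + rotation) and cov is its 6x6
    covariance. Returns the fused mean and fused covariance.
    """
    info = np.zeros((6, 6))        # accumulated information matrix
    weighted = np.zeros(6)         # information-weighted mean
    for mean, cov in estimates:
        inv_cov = np.linalg.inv(cov)
        info += inv_cov
        weighted += inv_cov @ mean
    fused_cov = np.linalg.inv(info)
    return fused_cov @ weighted, fused_cov

# Two hypothetical experiences: one confident, one less so.
confident = (np.zeros(6), 0.04 * np.eye(6))
uncertain = (0.1 * np.ones(6), 0.16 * np.eye(6))
mean, cov = fuse_experience_estimates([confident, uncertain])
# The fused estimate leans toward the confident experience and is
# more certain than either input on its own.
```

The intuition matches the paper's claim: adding experiences gathered under other lighting conditions can only tighten, never loosen, the localization estimate against the single manually taught route.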
Keywords
environmental appearance, multiexperience localization, MEL algorithm, visual teach and repeat, VT&R, vision-based algorithm, route-following algorithm, autonomous robot, vision sensor