VOILA: Visual-Observation-Only Imitation Learning for Autonomous Navigation

IEEE International Conference on Robotics and Automation (2022)

Abstract
While imitation learning for vision-based autonomous mobile robot navigation has recently received a great deal of attention in the research community, existing approaches typically require state-action demonstrations that were gathered using the deployment platform. However, what if one cannot easily outfit their platform to record these demonstration signals or, worse yet, the demonstrator does not have access to the platform at all? Is imitation learning for vision-based autonomous navigation even possible in such scenarios? In this work, we hypothesize that the answer is yes and that recent ideas from the Imitation from Observation (IfO) literature can be brought to bear such that a robot can learn to navigate using only ego-centric video collected by a demonstrator, even in the presence of viewpoint mismatch. To this end, we introduce a new algorithm, Visual-Observation-only Imitation Learning for Autonomous navigation (VOILA), that can successfully learn navigation policies from a single video demonstration collected from a physically different agent. We evaluate VOILA in the AirSim simulator and show that VOILA not only successfully imitates the expert, but that it also learns navigation policies that can generalize to novel environments. Further, we demonstrate the effectiveness of VOILA in a real-world setting by showing that it allows a wheeled Jackal robot to successfully imitate a human walking in an environment while recording video with a handheld mobile phone camera.
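To make the abstract's core idea concrete, below is a minimal, hypothetical sketch of the general imitation-from-observation recipe it alludes to: the action-free demonstration video is turned into a dense reward by comparing the robot's current camera image against the demonstration frame it is currently tracking, and an RL policy (not shown) would then maximize that reward. This is an illustrative assumption, not VOILA's actual reward, feature matcher, or training procedure; `extract_features`, the cosine-similarity reward, and the frame-advancing heuristic are all placeholders.

```python
import numpy as np

def extract_features(image: np.ndarray, grid: int = 8) -> np.ndarray:
    """Hypothetical viewpoint-tolerant feature extractor: a coarse grid of
    mean intensities, L2-normalized. A real system would use learned or
    keypoint-based visual features instead."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    h, w = gray.shape
    cells = [
        gray[i * h // grid:(i + 1) * h // grid,
             j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ]
    feats = np.asarray(cells, dtype=np.float64)
    return feats / (np.linalg.norm(feats) + 1e-8)

class ObservationOnlyReward:
    """Dense reward from a single action-free, ego-centric demonstration
    video: similarity between the robot's current image and the
    demonstration frame it is currently trying to reach."""

    def __init__(self, demo_frames, match_threshold: float = 0.9):
        self.demo_feats = [extract_features(f) for f in demo_frames]
        self.idx = 0                      # index of the tracked demo frame
        self.match_threshold = match_threshold

    def reset(self) -> None:
        self.idx = 0

    def __call__(self, robot_image: np.ndarray) -> float:
        feats = extract_features(robot_image)
        similarity = float(feats @ self.demo_feats[self.idx])
        # Advance to the next demonstration frame once the current one is
        # matched well enough (illustrative heuristic, not the paper's rule).
        if similarity > self.match_threshold and self.idx < len(self.demo_feats) - 1:
            self.idx += 1
        return similarity

# Toy usage with random images standing in for the demo video and the
# robot's camera stream.
demo = [np.random.rand(64, 64, 3) for _ in range(10)]
reward_fn = ObservationOnlyReward(demo)
r = reward_fn(np.random.rand(64, 64, 3))
```

Tracking an index into the demonstration keeps the reward dense and tied to progress along the demonstrated route, without ever needing the demonstrator's actions, which is what allows the demonstration platform (here, a handheld phone camera) to differ from the deployment robot.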
Keywords
deployment platform, demonstration signals, imitation learning, vision-based autonomous navigation, recent ideas, Observation literature, ego-centric video, Visual-Observation-only, VOILA, navigation policies, single video demonstration, wheeled Jackal robot, vision-based autonomous mobile robot navigation, research community, state-action demonstrations