Generalizable Task Planning Through Representation Pretraining

IEEE Robotics and Automation Letters (2022)

Citations: 7 | Views: 72
Abstract
The ability to plan multi-step manipulation tasks in unseen situations is crucial for future home robots. However, collecting sufficient experience data for end-to-end learning is often infeasible in the real world, as deploying robots across many environments can be prohibitively expensive. On the other hand, large-scale scene understanding datasets contain diverse and rich semantic and geometric information, but how to leverage such information for manipulation remains an open problem. In this letter, we propose a learning-to-plan method that generalizes to new object instances by leveraging object-level representations extracted from a synthetic scene understanding dataset. We evaluate our method on a suite of challenging multi-step manipulation tasks inspired by household activities (Srivastava et al., 2022) and show that our model achieves a measurably higher success rate than state-of-the-art end-to-end approaches.
Keywords
Integrated planning and learning, task and motion planning, representation learning