Zero-Shot Imitating Collaborative Manipulation Plans from YouTube Cooking Videos

arXiv (2022)

Abstract
People often watch videos on the web to learn how to cook new recipes, assemble furniture, or repair a computer. We wish to equip robots with the very same capability. This is challenging: manipulation actions vary widely, and some videos even involve multiple people who collaborate by sharing and exchanging objects and tools. Furthermore, the learned representations need to be general enough to be transferable to robotic systems. On the other hand, previous work has shown that the space of human manipulation actions has a linguistic, hierarchical structure that relates actions to manipulated objects and tools. Building upon this theory of language for action, we propose a system for understanding and executing demonstrated action sequences from full-length, real-world cooking videos on the web. The system takes as input a new, previously unseen cooking video annotated with object labels and bounding boxes, and outputs a collaborative manipulation action plan for one or more robotic arms. We demonstrate the performance of the system on a standardized dataset of 100 YouTube cooking videos, as well as on six full-length YouTube videos that include collaborative actions between two participants. We compare our system against a state-of-the-art action detection baseline and show that it achieves higher action detection accuracy. We additionally propose an open-source platform for executing the learned plans in a simulation environment as well as with an actual robotic arm.
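To make the output of the system concrete, below is a minimal sketch (not the authors' implementation) of how a collaborative manipulation action plan extracted from an annotated video might be represented: each step relates an action to a tool, a manipulated object, and the robotic arm assigned to it, following the action-grammar view described in the abstract. All class and field names here are illustrative assumptions.

```python
# Hypothetical representation of a collaborative manipulation action plan.
# Field names (action, tool, target, arm) are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ActionStep:
    action: str            # e.g. "cut", "pour", "hand_over"
    tool: Optional[str]    # e.g. "knife"; None for bare-hand actions
    target: str            # manipulated object, e.g. "tomato"
    arm: int               # index of the robotic arm executing the step

@dataclass
class CollaborativePlan:
    steps: List[ActionStep] = field(default_factory=list)

    def steps_for_arm(self, arm: int) -> List[ActionStep]:
        """Return the sub-plan assigned to a single arm."""
        return [s for s in self.steps if s.arm == arm]

# Toy example: arm 0 cuts a tomato and hands it over; arm 1 places it in a bowl.
plan = CollaborativePlan(steps=[
    ActionStep(action="cut", tool="knife", target="tomato", arm=0),
    ActionStep(action="hand_over", tool=None, target="tomato", arm=0),
    ActionStep(action="place", tool=None, target="bowl", arm=1),
])
print([s.action for s in plan.steps_for_arm(0)])  # ['cut', 'hand_over']
```

A flat list of such steps, filtered per arm, is one simple way a learned plan could be handed to a simulator or a real robotic arm for execution.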