Pushing the Limits of Cross-Embodiment Learning for Manipulation and Navigation
CoRR (2024)
Abstract
Recent years have seen remarkable progress in robotics and imitation
learning, with large-scale foundation models trained by leveraging data
across a multitude of embodiments. The success of such policies might lead us to wonder:
just how diverse can the robots in the training set be while still facilitating
positive transfer? In this work, we study this question in the context of
heterogeneous embodiments, examining how even seemingly very different domains,
such as robotic navigation and manipulation, can provide benefits when included
in the training data for the same model. We train a single goal-conditioned
policy that is capable of controlling robotic arms, quadcopters, quadrupeds,
and mobile bases. We then investigate the extent to which transfer can occur
across navigation and manipulation on these embodiments by framing them as a
single goal-reaching task. We find that co-training with navigation data can
enhance robustness and performance in goal-conditioned manipulation with a
wrist-mounted camera. We then deploy our policy, trained only on
navigation data and static manipulation data, on a mobile manipulator,
showing that it can control a novel embodiment in a zero-shot manner. These
results provide evidence that large-scale robotic policies can benefit from
data collected across various embodiments. Further information and robot videos
can be found on our project website http://extreme-cross-embodiment.github.io.