Geometry Matching for Multi-Embodiment Grasping
CoRR (2023)
Abstract
Many existing learning-based grasping approaches concentrate on a single
embodiment, provide limited generalization to higher DoF end-effectors and
cannot capture a diverse set of grasp modes. We tackle the problem of grasping
using multiple embodiments by learning rich geometric representations for both
objects and end-effectors using Graph Neural Networks. Our method, GeoMatch,
applies supervised learning to grasping data from multiple embodiments,
learning end-to-end contact-point likelihood maps and predicting grasps
autoregressively, keypoint by keypoint. We
compare our method against baselines that support multiple embodiments. Our
approach performs better across three end-effectors, while also producing
diverse grasps. Examples, including real robot demos, can be found at
geo-match.github.io.
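The abstract describes decoding a grasp autoregressively, choosing one contact keypoint at a time conditioned on the keypoints already placed. The following is a minimal, hypothetical sketch of that decoding loop only: the names (`score_keypoint`, `select_contacts`) and the toy distance-based scoring rule stand in for the paper's learned GNN likelihood maps and are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of autoregressive keypoint-by-keypoint decoding.
# A real system would replace `score_keypoint` with a learned likelihood
# over object vertices, conditioned on GNN embeddings of object and gripper.
import math

def score_keypoint(vertex, prev_contacts):
    """Toy stand-in for a learned likelihood: prefer vertices far from
    the contacts already chosen (a simple diversity heuristic)."""
    if not prev_contacts:
        return 0.0
    return min(math.dist(vertex, c) for c in prev_contacts)

def select_contacts(object_vertices, num_keypoints):
    """Greedy autoregressive decoding: pick contact points one keypoint
    at a time, each choice conditioned on the contacts chosen so far."""
    contacts = [object_vertices[0]]  # seed with an arbitrary first vertex
    for _ in range(num_keypoints - 1):
        best = max(object_vertices, key=lambda v: score_keypoint(v, contacts))
        contacts.append(best)
    return contacts

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(select_contacts(verts, 3))
```

Greedy decoding is one of several options here; a learned model could equally sample from each conditional distribution to produce the diverse grasp modes the abstract mentions.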