MasKGrasp: Mask-based Grasping for Scenes with Multiple General Real-world Objects

2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Abstract
In this paper, we introduce a mask-based grasping method that discerns multiple objects within a scene regardless of transparency or specularity and finds the optimal grasp position while avoiding clutter. Conventional vision-based robotic grasping approaches often fail to extend to scenes containing transparent objects because of their different visual appearance. To handle these differing visual characteristics, we first segment both transparent and opaque objects into instance masks using a neural network; the masks serve as a domain-agnostic intermediate representation shared by both object types. Since no labelled training dataset strongly represents both object types, we overcome this limitation by augmenting an existing large-scale dataset with transparent objects. Then, given the object instance masks, our method selects the top-K discrete masks and robustly estimates grasp poses that avoid clutter. Through experiments, we verify that instance masks are lightweight yet provide sufficient information for vision-based grasping regardless of object appearance. On an unseen real-world test environment with complex objects, our method substantially outperforms previous methods without fine-tuning.
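To make the mask-based pipeline concrete, the following is a minimal sketch, not the authors' implementation: it assumes instance masks arrive as boolean NumPy arrays, ranks them by pixel area to pick the top-K, and derives a planar grasp from each mask via PCA (grasp at the centroid, gripper closing along the minor axis). The function names select_top_k_masks and grasp_pose_from_mask are hypothetical, and the network-based segmentation, clutter avoidance, and pose refinement described in the abstract are omitted.

import numpy as np

def select_top_k_masks(masks, k=3):
    # Rank binary instance masks by pixel area and keep the K largest
    # (a simple stand-in for the paper's top-K mask selection).
    areas = [int(m.sum()) for m in masks]
    order = np.argsort(areas)[::-1]
    return [masks[i] for i in order[:k]]

def grasp_pose_from_mask(mask):
    # Estimate a planar grasp (center, angle) from one mask via PCA:
    # grasp at the mask centroid, closing along the axis of smallest extent.
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    minor_axis = eigvecs[:, 0]          # eigenvector of the smallest eigenvalue
    angle = float(np.arctan2(minor_axis[1], minor_axis[0]))
    return center, angle

if __name__ == "__main__":
    # Toy scene: two rectangular "objects" in a 100x100 image.
    scene = np.zeros((100, 100), dtype=bool)
    m1 = scene.copy(); m1[20:40, 10:70] = True    # wide object
    m2 = scene.copy(); m2[60:90, 50:60] = True    # tall, narrow object
    for m in select_top_k_masks([m1, m2], k=2):
        c, a = grasp_pose_from_mask(m)
        print("grasp center:", c, "angle (rad):", round(a, 3))

In practice, the ranking and pose estimation would also account for neighboring masks (clutter), which this toy example leaves out.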
Keywords
grasping, objects, scenes, real-world, mask-based