SafePicking: Learning Safe Object Extraction via Object-Level Mapping

IEEE International Conference on Robotics and Automation (2022)

Cited by 7 | Views 14
Abstract
Robots need object-level scene understanding to manipulate objects while reasoning about contact, support, and occlusion among objects. Given a pile of objects, object recognition and reconstruction can identify the boundaries of object instances, giving important cues as to how the objects form and support the pile. In this work, we present a system, SafePicking, that integrates object-level mapping and learning-based motion planning to generate a motion that safely extracts occluded target objects from a pile. Planning is done by learning a deep Q-network that receives observations of predicted poses and a depth-based heightmap and outputs a motion trajectory, trained to maximize a safety metric reward. Our results show that fusing pose observations with depth sensing gives the model both better performance and greater robustness. We evaluate our methods using the YCB objects in both simulation and the real world, achieving safe object extraction from piles.
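The abstract describes a deep Q-network that fuses two observation streams, predicted object poses and a depth-based heightmap, and outputs a motion. A minimal sketch of this kind of observation fusion is shown below; all dimensions, layer sizes, and names are hypothetical and do not reflect the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- illustrative only, not taken from the paper.
POSE_DIM = 7                 # one predicted object pose: xyz + quaternion
HEIGHTMAP_SHAPE = (32, 32)   # depth-based heightmap resolution
N_ACTIONS = 6                # discretized end-effector motion primitives

class TinyQNetwork:
    """Two-layer Q-network sketch that fuses pose and heightmap observations."""

    def __init__(self, hidden=64):
        in_dim = POSE_DIM + HEIGHTMAP_SHAPE[0] * HEIGHTMAP_SHAPE[1]
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, N_ACTIONS))
        self.b2 = np.zeros(N_ACTIONS)

    def q_values(self, pose, heightmap):
        # Observation fusion: concatenate the pose vector with the
        # flattened heightmap before the first fully connected layer.
        x = np.concatenate([pose, heightmap.ravel()])
        h = np.maximum(0.0, x @ self.w1 + self.b1)  # ReLU
        return h @ self.w2 + self.b2                # one Q-value per action

net = TinyQNetwork()
pose = rng.normal(size=POSE_DIM)
heightmap = rng.random(HEIGHTMAP_SHAPE)
q = net.q_values(pose, heightmap)
action = int(np.argmax(q))  # greedy action selection
```

Training such a network against a safety-metric reward would follow the standard DQN recipe (replay buffer, target network, TD targets); the sketch above covers only the fused forward pass.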
Keywords
SafePicking, safe object extraction, object-level mapping, object-level scene understanding, object recognition, YCB objects, learning-based motion planning, occluded target object extraction, deep Q-network learning, predicted poses, depth-based heightmap, motion trajectory, safety metric reward maximization