Extraction Of Physically Plausible Support Relations To Predict And Validate Manipulation Action Effects

IEEE ROBOTICS AND AUTOMATION LETTERS (2018)

Abstract
Reliable execution of robot manipulation actions in cluttered environments requires that the robot understand relations between objects and reason about the consequences of actions applied to these objects. We present an approach for extracting physically plausible support relations between objects based on visual information, which does not require any prior knowledge about physical object properties such as mass distribution or friction coefficients. Based on a scene representation enriched with such physically plausible support relations, we derive predictions about action effects. These predictions take into account uncertainty about support relations and allow the application of strategies for safe bimanual object manipulation when needed. The extraction of physically plausible support relations is evaluated both in simulation and in real-world experiments using real data from a depth camera, whereas the handling of support relation uncertainties is validated on the humanoid robot ARMAR-III.
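To give a rough sense of what extracting support relations from visual data can look like, the sketch below infers candidate "A supports B" relations from axis-aligned bounding boxes of segmented objects using a vertical-contact test and a center-of-mass-over-footprint check as a crude stability proxy. This is only an illustrative sketch, not the authors' algorithm; the `Box` class, the tolerances, and all function names are hypothetical.

```python
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Box:
    """Hypothetical axis-aligned bounding box of a segmented object
    (e.g., from an RGB-D segmentation pipeline); units in metres, z up."""
    name: str
    x_min: float; x_max: float
    y_min: float; y_max: float
    z_min: float; z_max: float

    @property
    def center_xy(self):
        return (0.5 * (self.x_min + self.x_max),
                0.5 * (self.y_min + self.y_max))

def horizontal_overlap(a: Box, b: Box) -> float:
    """Area of the overlap of the two boxes' footprints in the x-y plane."""
    dx = min(a.x_max, b.x_max) - max(a.x_min, b.x_min)
    dy = min(a.y_max, b.y_max) - max(a.y_min, b.y_min)
    return max(dx, 0.0) * max(dy, 0.0)

def supports(lower: Box, upper: Box,
             contact_tol: float = 0.01, min_overlap: float = 1e-4) -> bool:
    """Heuristic test for 'lower supports upper':
    (1) the bottom of `upper` rests (within `contact_tol`) on the top of
        `lower`, and
    (2) the horizontal projection of `upper`'s centre lies inside `lower`'s
        footprint, a crude static-stability proxy when mass distribution
        and friction are unknown."""
    in_contact = abs(upper.z_min - lower.z_max) <= contact_tol
    cx, cy = upper.center_xy
    com_over_support = (lower.x_min <= cx <= lower.x_max and
                        lower.y_min <= cy <= lower.y_max)
    return (in_contact and com_over_support and
            horizontal_overlap(lower, upper) >= min_overlap)

def support_graph(boxes):
    """Collect all directed (supporter, supported) pairs among the boxes."""
    return [(a.name, b.name) for a, b in permutations(boxes, 2)
            if supports(a, b)]

if __name__ == "__main__":
    table = Box("table", 0.0, 1.0, 0.0, 1.0, 0.00, 0.75)
    cup   = Box("cup",   0.4, 0.5, 0.4, 0.5, 0.75, 0.85)
    print(support_graph([table, cup]))   # [('table', 'cup')]
```

A support graph of this kind could then feed a planner that predicts which objects would become unsupported if a given object were removed; the paper's approach additionally models uncertainty in these relations, which this sketch does not attempt.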
Keywords
Perception for grasping and manipulation, semantic scene understanding, RGB-D perception