Robot Learning Physical Object Properties from Human Visual Cues: A novel approach to infer the fullness level in containers

IEEE International Conference on Robotics and Automation (ICRA), 2022

Abstract
In collaborative tasks involving handovers, humans exploit visual, non-verbal cues to infer physical object properties, such as mass, and modulate their actions accordingly. In this paper, we investigate how the level of liquid inside a cup can be inferred from observing the movement of the person handling it. We model this mechanism from human experiments and incorporate it into an online human-to-robot handover. Finally, we provide a new dataset with human eye+head+hand motion data for human-to-human handovers and human pick-and-place of a cup with three fill levels: empty, half-full, and full of water. Our results show that it is possible to model the (non-verbal) signals exchanged by humans during interaction and to classify the level of water inside the cup being handed over.
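To make the classification task concrete, below is a minimal sketch of what a fill-level classifier over human motion features might look like. The feature set (wrist speed, cup tilt, etc.), the synthetic data, and the SVM pipeline are illustrative assumptions for this sketch; they do not reproduce the paper's actual features or model.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-trial features extracted from eye+head+hand tracking,
# e.g. [mean wrist speed, peak wrist acceleration, transport duration,
# mean cup tilt]. Synthetic data stands in for the real recordings.
n_per_class = 40
labels = ["empty", "half-full", "full"]
X, y = [], []
for k, label in enumerate(labels):
    # Assumption: fuller cups are moved more slowly and held more upright,
    # so the synthetic feature means shift per class.
    mean = np.array([0.8, 3.0, 1.0, 10.0]) - k * np.array([0.2, 0.8, -0.3, 2.5])
    X.append(rng.normal(mean, 0.15 * np.abs(mean), size=(n_per_class, 4)))
    y += [label] * n_per_class
X = np.vstack(X)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Standardize features, then fit a three-class SVM (empty / half-full / full).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

In an online handover setting, the same kind of classifier would be run on features computed from the incoming motion stream before the robot commits to a grasp, but that integration is beyond this sketch.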
Keywords
cup,robot learning physical object properties,human visual cues,fullness level,collaborative tasks,nonverbal cues,human experiments,human-to-robot handover,human eye+head+hand motion data,human-to-human handovers