Show Me What To Pick: Pointing Versus Spatial Gestures for Conveying Intent

2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023

Abstract
Gestures are a convenient modality for conveying human intent in collaborative human-robot tasks, and the pointing gesture is commonly used in pick-and-place tasks. However, it is hard to accurately detect the pointed-to location with stereo cameras, and experiments in the literature tend to space out the objects of interest to make the task easier. We propose the use of gestures conveying spatial directions as an alternative to the pointing gesture when objects are closely packed, since inaccuracies in detecting the pointed-to spatial location can increase task completion difficulty. Using a human study, we confirm that the gestures we propose are naturally used by humans collaborating with other humans on the task. We then develop a computer vision pipeline capable of generating a vector representing the pointing direction and detecting specific spatial gestures from an RGB-D video stream. Using a self-report survey, we show statistically significant evidence that subjects report higher satisfaction and better team performance when using spatial gestures instead of the pointing gesture to communicate with a robotic teammate. Finally, we show preliminary evidence that this trend holds even when the accuracy of pointing-location detection is artificially inflated.
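The abstract does not describe the pipeline's internals, but the core geometric step of estimating a pointed-to location can be sketched generically: given two 3D hand keypoints from an RGB-D stream (e.g. wrist and index fingertip, as produced by a pose estimator), cast a ray through them and intersect it with the table plane. The function name, keypoint choice, and plane parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pointing_target(wrist, fingertip, plane_point, plane_normal):
    """Intersect the wrist->fingertip ray with a table plane.

    All arguments are 3D points/vectors in the camera frame (a
    hypothetical setup; the paper's actual frames are not specified).
    Returns the 3D intersection point, or None if the ray is parallel
    to the plane or points away from it.
    """
    wrist = np.asarray(wrist, dtype=float)
    direction = np.asarray(fingertip, dtype=float) - wrist
    direction /= np.linalg.norm(direction)  # unit pointing vector
    plane_normal = np.asarray(plane_normal, dtype=float)

    denom = direction.dot(plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the table plane
    t = (np.asarray(plane_point, dtype=float) - wrist).dot(plane_normal) / denom
    if t < 0:
        return None  # table plane is behind the hand
    return wrist + t * direction

# Hand 1 m above a horizontal table (z = 0), pointing down and forward:
target = pointing_target(wrist=[0, 0, 1.0], fingertip=[0.1, 0, 0.9],
                         plane_point=[0, 0, 0], plane_normal=[0, 0, 1])
# -> roughly [1.0, 0.0, 0.0]
```

Small angular errors in the unit pointing vector translate into position errors that grow with the distance to the table, which is one way to see why pointing becomes unreliable when target objects are closely packed.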