Gaze-based intention recognition for pick-and-place tasks in shared autonomy

Semantic Scholar (2020)

Abstract
Shared autonomy aims at combining robotic and human control in the execution of remote, teleoperated tasks. This cooperative interaction cannot be brought about without the robot first recognizing the current human intention in a fast and reliable way, so that a suitable assisting plan can be quickly instantiated and executed. Eye movements have long been known to be highly predictive of the cognitive agenda unfolding during manual tasks and hence constitute the earliest and most reliable behavioral cues for intention estimation. In this study, we present an experiment aimed at analyzing human behavior in simple teleoperated pick-and-place tasks in a simulated scenario and at devising a suitable model for early estimation of the current proximal intention, that is, either the reaching target or the place-down location. We show that scan paths are, as expected, heavily shaped by the current intention and that a Gaussian Hidden Markov Model achieves good prediction performance, while also generalizing to a new object configuration and new users. We finally discuss how the behavioral and model results suggest that eye movements reflect, to some extent, the invariance and generality of higher-level planning across object configurations.
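The paper's Gaussian HMM is not reproduced here, but the core idea it rests on — score each candidate proximal intention by how well a Gaussian observation model anchored at that target explains the observed gaze trace, then pick the most likely one — can be sketched in a few lines of numpy. All coordinates, targets, and the fixed diagonal variance below are hypothetical illustration values, not from the paper; a single Gaussian per intention is a deliberate simplification of the multi-state HMM.

```python
import numpy as np

def gaussian_loglik(points, mean, var):
    # Log-likelihood of 2-D fixation points under an isotropic Gaussian
    # with the given mean and variance, summed over the sequence.
    d = points - mean
    return float(np.sum(-0.5 * (d ** 2 / var + np.log(2 * np.pi * var))))

def predict_intention(fixations, targets, var=4.0):
    """Return the index of the candidate target whose Gaussian observation
    model best explains the gaze fixations (argmax log-likelihood)."""
    scores = [gaussian_loglik(fixations, t, var) for t in targets]
    return int(np.argmax(scores)), scores

# Hypothetical scene: two objects on a table, in pixel coordinates.
targets = [np.array([100.0, 50.0]), np.array([300.0, 200.0])]

# A short gaze trace converging on the first object.
fixations = np.array([[110.0, 60.0],
                      [104.0, 52.0],
                      [101.0, 49.0]])

idx, scores = predict_intention(fixations, targets)
```

In the full model, each intention would instead be scored by a Gaussian HMM whose hidden states capture the successive phases of the scan path (e.g. search, approach, fixation on the target), which is what lets the prediction fire early, before the hand reaches the object.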