A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)(2018)

Cited by 21 | Views 41
Abstract
Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods for disambiguating natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures such as engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but not in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall preferred the augmented reality condition to the monitor and mixed reality conditions.
Keywords
visualisation methods,verbal requests,human-robot interaction,multiple objects,disambiguate natural language instructions,YuMi robot,head-mounted display condition,multimodal behaviour,monitor,projector,realtime augmentations