Context-Based Multimodal Output for Human-Robot Collaboration

2018 11th International Conference on Human System Interaction (HSI)(2018)

Abstract
Research on multimodal systems for human-robot interaction mostly focuses on the processing of inputs. Yet, the output is equally important: a robot that can use different modalities in an interaction appears more natural and is easier to understand. In this paper, we present our multimodal fission (MMF) framework, which incorporates planning criteria to select the most suitable set of modalities based on information about the interaction context. We describe its input and output layers, present an algorithm for the automated selection of suitable attributes for referencing objects verbally, and give a simple assessment of the suitability of pointing gestures in the given context. Furthermore, we describe a new approach that formulates modality and device selection as constraint optimization problems. Finally, we report the results of a user study conducted to evaluate the generated multimodal output.
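The abstract describes modality and device selection formulated as constraint optimization problems. As a rough illustration of that idea (not the paper's actual formulation — the modality names, suitability scores, and constraints below are invented for the sketch), one can pick the subset of output modalities with the highest context-dependent suitability subject to hard compatibility constraints:

```python
from itertools import combinations

# Hypothetical context-dependent suitability scores for each output
# modality (illustrative values, not taken from the paper).
suitability = {"speech": 0.9, "pointing": 0.6, "display": 0.4}

# Hypothetical hard constraints: modality pairs that may not be
# combined, plus a cap on simultaneous modalities.
incompatible = {("pointing", "display")}
max_modalities = 2

def best_modality_set(suitability, incompatible, max_modalities):
    """Brute-force the constraint optimization: return the subset of
    modalities with the highest total suitability that violates no
    incompatibility constraint."""
    best, best_score = (), float("-inf")
    names = sorted(suitability)
    for k in range(1, max_modalities + 1):
        for subset in combinations(names, k):
            # Reject subsets containing an incompatible pair.
            if any((a, b) in incompatible or (b, a) in incompatible
                   for a in subset for b in subset if a < b):
                continue
            score = sum(suitability[m] for m in subset)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

print(best_modality_set(suitability, incompatible, max_modalities))
```

For the toy scores above, the solver picks speech plus pointing; a real system would solve a richer problem (device assignment, soft preferences) with a dedicated constraint solver rather than enumeration.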
Keywords
Multimodal Fission, Multimodal Reference Generation, Modality Selection, Human-Robot Collaboration