Gaze Modulated Disambiguation Technique for Gesture Control in 3D Virtual Objects Selection

2017 3rd IEEE International Conference on Cybernetics (CYBCONF), 2017

Cited by 8 | Views 43
Abstract
Multimodal input provides a more natural way to interact with virtual 3D environments. An emerging technique that integrates gaze-modulated pointing with mid-air gesture control enables fast target acquisition and rich control expression. The performance of this technique relies on eye tracking accuracy, which is not yet comparable with that of traditional pointing devices (e.g., the mouse). This causes problems when fine-grained interaction is required, such as selection in a dense virtual scene where proximity and occlusion are common. This paper proposes a coarse-to-fine solution that compensates for the degradation introduced by eye tracking inaccuracy, using a gaze cone to detect ambiguity and then a gaze probe for decluttering. The technique was tested in a comparative experiment involving 12 participants and 3240 runs. The results show that the proposed technique improved selection accuracy and user experience, although there is still room to improve its efficiency. This study contributes a robust multimodal interface design supported by both eye tracking and mid-air gesture control.
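To make the coarse-to-fine idea concrete, the sketch below illustrates how a gaze cone could flag an ambiguous selection: objects whose centers lie within a fixed angular tolerance of the gaze ray become candidates, and more than one candidate would trigger the fine-grained stage (the gaze probe / decluttering step described in the abstract). The geometry, the 3-degree half-angle, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaze_cone_candidates(gaze_origin, gaze_dir, object_centers, half_angle_deg=3.0):
    """Return indices of objects falling inside a cone around the gaze ray.

    gaze_origin: (3,) eye position in world coordinates.
    gaze_dir: (3,) gaze direction (need not be normalized).
    object_centers: (N, 3) world-space object centers.
    half_angle_deg: assumed angular tolerance reflecting eye-tracker error.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    to_objects = object_centers - gaze_origin              # vectors eye -> object
    dists = np.linalg.norm(to_objects, axis=1)
    cos_angles = (to_objects @ gaze_dir) / np.maximum(dists, 1e-9)
    inside = cos_angles >= np.cos(np.radians(half_angle_deg))
    return np.nonzero(inside)[0]

# Coarse stage: if the cone contains more than one object, the selection is
# ambiguous and a fine stage (e.g., a probe that spreads the cluttered
# candidates apart for gesture-based confirmation) would be invoked.
eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.0, 0.0, -1.0])
centers = np.array([[0.05, 0.00, -2.0],    # close to the gaze ray
                    [-0.04, 0.03, -2.0],   # also close -> clutter
                    [1.00, 0.00, -2.0]])   # clearly off to the side
candidates = gaze_cone_candidates(eye, gaze, centers)
if len(candidates) > 1:
    print("Ambiguous selection, candidates:", candidates)  # fine stage needed
else:
    print("Unambiguous selection:", candidates)
```

In this hypothetical setup the first two objects fall inside the cone and the third does not, so the coarse stage would hand two candidates to the disambiguation step rather than committing to a possibly wrong target.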
Keywords
gaze modulated disambiguation,3D virtual objects selection,multimodal information,mid-air gesture control,target acquisition,eye tracking accuracy,gaze cone,gaze probe,multimodal interface design