Empirical study of a vision-based depth-sensitive human-computer interaction system.

APCHI '12: Asia Pacific Conference on Computer Human Interaction, Matsue-city, Shimane, Japan, August 2012

Abstract
This paper presents the results of a user study on a vision-based, depth-sensitive input system for performing typical desktop tasks through arm gestures. We developed a vision-based HCI prototype to be used in a comprehensive usability study. Using the Kinect 3D camera and the OpenNI software library, we implemented our system with high stability and efficiency by reducing the influence of ambient factors such as noise and lighting conditions. In our prototype, we designed a gesture-recognition algorithm on top of the NITE toolkit to detect arm gestures. Finally, through a comprehensive user experiment we compared our natural arm gestures to conventional input devices (mouse/keyboard), for simple and complex tasks, and in two different settings (small and big-screen displays), measuring precision, efficiency, ease of use, pleasantness, fatigue, naturalness, and overall satisfaction, in order to test the following hypothesis: on a WIMP user interface, gesture-based input is superior to the mouse/keyboard when using a big-screen display. Our empirical investigation also shows that gestures are more natural and pleasant to use than the mouse/keyboard. However, arm gestures cause more fatigue than the mouse.
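The abstract names the Kinect/OpenNI/NITE stack but gives no implementation details. As a rough illustration of how such a hand-tracking pipeline is typically wired up, the following is a minimal sketch assuming the OpenNI 1.x C++ API, a "Wave" focus gesture, and an arbitrary ±300 mm interaction box for cursor mapping; these choices are assumptions for illustration, not the authors' code or parameters.

    // Hypothetical sketch: OpenNI 1.x hand tracking started by a focus gesture,
    // with the tracked hand position mapped to screen coordinates.
    #include <XnCppWrapper.h>
    #include <cstdio>

    static void XN_CALLBACK_TYPE GestureRecognized(xn::GestureGenerator& gen,
        const XnChar* gesture, const XnPoint3D* idPos,
        const XnPoint3D* endPos, void* cookie)
    {
        // When the focus gesture is recognized, start tracking that hand.
        xn::HandsGenerator* hands = static_cast<xn::HandsGenerator*>(cookie);
        hands->StartTracking(*endPos);
    }

    static void XN_CALLBACK_TYPE GestureProgress(xn::GestureGenerator&,
        const XnChar*, const XnPoint3D*, XnFloat, void*) {}

    static void XN_CALLBACK_TYPE HandCreate(xn::HandsGenerator&, XnUserID id,
        const XnPoint3D* pos, XnFloat, void*)
    {
        printf("Hand %u acquired at (%.0f, %.0f, %.0f)\n", id, pos->X, pos->Y, pos->Z);
    }

    static void XN_CALLBACK_TYPE HandUpdate(xn::HandsGenerator&, XnUserID id,
        const XnPoint3D* pos, XnFloat, void*)
    {
        // Map hand X/Y (mm, camera space) into an assumed 1920x1080 screen,
        // using an assumed +-300 mm box around the tracking origin.
        const float screenW = 1920.0f, screenH = 1080.0f;
        float x = (pos->X + 300.0f) / 600.0f * screenW;
        float y = (300.0f - pos->Y) / 600.0f * screenH;
        printf("Hand %u -> cursor (%.0f, %.0f)\n", id, x, y);
    }

    static void XN_CALLBACK_TYPE HandDestroy(xn::HandsGenerator&, XnUserID id,
        XnFloat, void*)
    {
        printf("Hand %u lost\n", id);
    }

    int main()
    {
        xn::Context context;
        context.Init();

        xn::GestureGenerator gestures;
        xn::HandsGenerator hands;
        gestures.Create(context);
        hands.Create(context);

        XnCallbackHandle hGesture, hHands;
        gestures.RegisterGestureCallbacks(GestureRecognized, GestureProgress,
                                          &hands, hGesture);
        hands.RegisterHandCallbacks(HandCreate, HandUpdate, HandDestroy,
                                    NULL, hHands);
        gestures.AddGesture("Wave", NULL);   // NITE focus gesture

        context.StartGeneratingAll();
        for (;;)
            context.WaitAndUpdateAll();      // pump frames; callbacks fire here
        return 0;
    }

A real desktop-control prototype would additionally translate the mapped position into OS cursor events and use further gestures (e.g. push or swipe) for clicks, which the sketch above does not attempt.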