Predicting Visual Focus of Attention From Intention in Remote Collaborative Tasks

IEEE Transactions on Multimedia (2008)

Abstract
While shared visual space plays a central role in remote collaboration on physical tasks, tracking users' focus of attention (FOA) during these tasks is challenging and expensive. In this paper, we propose to identify a user's FOA from his/her intention, based on task properties, people's actions in the workspace, and conversational content. We employ a conditional Markov model to characterize a subject's FOA. We demonstrate the feasibility of the proposed method using a collaborative laboratory task in which one partner (the helper) instructs another (the worker) on how to assemble online puzzles. We model a helper's FOA using task properties, workers' actions, and conversational content. The accuracy of the model ranged from 65.40% for puzzles with easy-to-name pieces to 74.25% for puzzles with more difficult-to-name pieces. The proposed model can be used to predict a user's FOA in a remote collaborative task without tracking the user's eye gaze.
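The abstract's core idea — a Markov model over FOA states whose transitions are conditioned on observed cues such as workspace actions and speech — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state names, cue encoding, and probability values are all hypothetical, and the paper's model would estimate its conditional transition probabilities from data rather than hard-coding them.

```python
# Hypothetical sketch of a conditional Markov model for FOA prediction.
# States: what the helper is attending to (puzzle pieces or the workspace).
# At each step, the transition distribution over next-FOA states is
# conditioned on the previous FOA state and an observed cue, e.g. the
# worker acting on a piece or the piece being mentioned in conversation.

STATES = ["piece_A", "piece_B", "workspace"]  # hypothetical FOA targets

def transition(prev_state, cue):
    """Return P(next FOA | previous FOA, observed cue) over STATES.
    Toy conditional probabilities, for illustration only."""
    base = [0.1] * len(STATES)
    base[STATES.index(prev_state)] += 0.3   # attention tends to persist
    if cue in STATES:
        base[STATES.index(cue)] += 0.5      # cues pull attention to their target
    total = sum(base)
    return [p / total for p in base]

def predict_foa(start, cues):
    """Greedy decoding: follow the most likely FOA state at each step."""
    state, path = start, [start]
    for cue in cues:
        probs = transition(state, cue)
        state = STATES[probs.index(max(probs))]
        path.append(state)
    return path

# A worker acts on piece A twice, then piece B is mentioned:
path = predict_foa("workspace", ["piece_A", "piece_A", "piece_B"])
```

A trained version of such a model would replace the hand-set weights with conditional transition probabilities estimated from annotated helper gaze data, which is what allows FOA to be predicted without an eye tracker.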
Keywords
Collaborative work,Cameras,Human computer interaction,Multimedia systems,Video sharing,Layout,Online Communities/Technical Collaboration,Laboratories,Assembly,Physics