Eyemotion: Classifying facial expressions in VR using eye-tracking cameras

2019 IEEE Winter Conference on Applications of Computer Vision (WACV)

Cited by 80 | Views 123
Abstract
One of the main challenges of social interaction in virtual reality settings is that head-mounted displays occlude a large portion of the face, blocking facial expressions and thereby restricting social engagement cues among users. Hence, auxiliary means of sensing and conveying these expressions are needed. We present an algorithm to automatically infer expressions by analyzing only a partially occluded face while the user is engaged in a virtual reality experience. Specifically, we show that images of the user's eyes captured from an IR gaze-tracking camera within a VR headset are sufficient to infer a select subset of facial expressions without the use of any fixed external camera. Using these inferences, we can generate dynamic avatars in real-time which function as an expressive surrogate for the user. We propose a novel data collection pipeline as well as a novel approach for increasing CNN accuracy via personalization. Our results show a mean accuracy of 74% ($F_1$ of 0.73) among 5 `emotive' expressions and a mean accuracy of 70% ($F_1$ of 0.68) among 10 distinct facial action units, outperforming human raters.
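For readers who want a concrete picture of what "classifying expressions from IR eye-camera crops with a CNN" can look like in code, below is a minimal, hypothetical PyTorch sketch. The architecture, input resolution, label set (`EXPRESSIONS`), and the `personalize()` fine-tuning helper are illustrative assumptions, not the authors' actual model or training pipeline.

```python
# Illustrative sketch only: a small CNN that maps a single-channel IR eye-region
# image to one of a few expression classes. All specifics below (layer sizes,
# 64x64 input, label names, personalization scheme) are assumptions for
# illustration and do not reproduce the paper's model.
import torch
import torch.nn as nn

EXPRESSIONS = ["neutral", "happy", "angry", "surprised", "closed_eyes"]  # hypothetical label set

class EyeExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        # Input assumed to be 1 x 64 x 64 (IR eye crop).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 x 32 x 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 x 8 x 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def personalize(model: EyeExpressionCNN, calib_images: torch.Tensor,
                calib_labels: torch.Tensor, steps: int = 50, lr: float = 1e-3) -> None:
    """One plausible personalization scheme (an assumption, not the paper's method):
    freeze the convolutional trunk and fine-tune only the final layer on a few
    labeled calibration frames from the target user."""
    for p in model.features.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(calib_images), calib_labels)
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    model = EyeExpressionCNN()
    frame = torch.randn(1, 1, 64, 64)  # stand-in for one IR eye-camera frame
    probs = torch.softmax(model(frame), dim=1)
    print(EXPRESSIONS[int(probs.argmax())])
```

Adapting only the classifier on a handful of per-user frames is just one common way to personalize; the paper's personalization approach may differ in both what is adapted and how the calibration data are collected.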
Keywords
fixed external camera, expressive surrogate, facial expressions, eye-tracking cameras, social interaction, virtual reality settings, head-mounted displays, social engagement cues, partially occluded face, virtual reality experience, IR gaze-tracking camera, VR headset, facial action units, CNN accuracy, personalization, emotive expressions