Real-time gaze prediction in virtual reality.

ACM SIGMM Conference on Multimedia Systems (MMSys), 2022

Abstract
Gaze is an important indicator of visual attention, and knowledge of gaze location can be used to improve and augment Virtual Reality (VR) experiences. This has led to the development of VR Head-Mounted Displays (HMDs) with built-in gaze trackers. Given the latency constraints of VR, foreknowledge of gaze, i.e., knowing the gaze location before it is reported by the gaze tracker, can similarly be leveraged to preemptively apply gaze-based improvements and augmentations to a VR experience, especially in distributed VR architectures. In this paper, we propose a lightweight neural-network-based method that uses only past HMD pose and gaze data to predict future gaze locations, forgoing computationally heavy saliency computation. Most work in this domain has focused on either 360° or egocentric video, or on synthetic VR content with rather naive interaction dynamics such as free viewing or supervised visual-search tasks. Our solution instead uses data from the extensive OpenNEEDS dataset, which contains 6-Degrees-of-Freedom (6DoF) data captured in VR experiences in which subjects were free to explore the VR scene and/or engage in tasks. Our solution outperforms a very strong baseline, using the current gaze as the prediction of future gaze, in real time for sub-150 ms prediction horizons in VR use cases.
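The abstract only describes the approach at a high level: a lightweight sequence model over a window of past HMD pose and gaze samples, compared against a "current gaze" (persistence) baseline at a fixed prediction horizon. The sketch below is a minimal illustration of that setup, not the authors' model: the GRU architecture, the 9-dimensional per-sample feature layout (head position, head orientation, gaze direction), the angular-error metric, and the names GazePredictor, persistence_baseline, and angular_error_deg are all assumptions introduced here for clarity.

```python
# Illustrative sketch only: the paper's code and exact architecture are not
# given here, so the model size, window length, and feature layout are guesses.
import torch
import torch.nn as nn


class GazePredictor(nn.Module):
    """Small sequence model mapping a window of past HMD pose + gaze
    samples to a predicted gaze direction at a fixed future horizon."""

    def __init__(self, feat_dim: int = 9, hidden: int = 32):
        super().__init__()
        # feat_dim = 9 is an assumption: 3D head position + 3D head
        # orientation (e.g. Euler angles) + 3D gaze direction per sample.
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # predicted 3D gaze direction

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, T, feat_dim), i.e. T past samples at the HMD rate.
        _, h = self.rnn(window)
        pred = self.head(h[-1])
        # Normalize to a unit gaze-direction vector.
        return pred / pred.norm(dim=-1, keepdim=True).clamp_min(1e-8)


def persistence_baseline(window: torch.Tensor) -> torch.Tensor:
    """The 'current gaze' baseline from the abstract: predict that gaze at
    the future horizon equals the most recently observed gaze sample."""
    return window[:, -1, -3:]  # last 3 features assumed to be gaze direction


def angular_error_deg(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Angle in degrees between predicted and ground-truth gaze directions."""
    cos = (pred * target).sum(dim=-1).clamp(-1.0, 1.0)
    return torch.rad2deg(torch.arccos(cos))


if __name__ == "__main__":
    # Random stand-in data: 4 windows of 20 past pose+gaze samples each.
    batch = torch.randn(4, 20, 9)
    batch[:, :, -3:] = nn.functional.normalize(batch[:, :, -3:], dim=-1)
    target = nn.functional.normalize(torch.randn(4, 3), dim=-1)

    model = GazePredictor()
    print(angular_error_deg(model(batch), target))             # learned model
    print(angular_error_deg(persistence_baseline(batch), target))  # baseline
```

Under this reading, "outperforming the baseline" means the learned model's angular error at the prediction horizon is lower than that of simply repeating the current gaze; the actual evaluation protocol is described in the paper itself.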