EyeEcho: Continuous and Low-power Facial Expression Tracking on Glasses
CoRR (2024)
Abstract
In this paper, we introduce EyeEcho, a minimally-obtrusive acoustic sensing
system designed to enable glasses to continuously monitor facial expressions.
It utilizes two pairs of speakers and microphones mounted on glasses to emit
encoded inaudible acoustic signals directed towards the face, capturing subtle
skin deformations associated with facial expressions. The reflected signals are
processed through a customized machine-learning pipeline to estimate full
facial movements. EyeEcho samples at 83.3 Hz with a relatively low power
consumption of 167 mW. Our user study involving 12 participants demonstrates
that, with just four minutes of training data, EyeEcho achieves highly accurate
tracking performance across different real-world scenarios, including sitting,
walking, and after remounting the devices. Additionally, a semi-in-the-wild
study involving 10 participants further validates EyeEcho's performance in
naturalistic scenarios while participants engage in various daily activities.
Finally, we showcase EyeEcho's potential to be deployed on a
commercial-off-the-shelf (COTS) smartphone, offering real-time facial
expression tracking.
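The abstract does not disclose EyeEcho's exact signal encoding, but the core acoustic-ranging principle it describes (emitting an inaudible signal and measuring its reflection off the skin) can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the chirp band, sample rate, frame duration, and matched-filter step are all hypothetical choices for the demo.

```python
import numpy as np

# Hypothetical sketch of the sensing principle: emit an inaudible chirp,
# then cross-correlate the recorded echo against the transmitted signal to
# estimate the reflection delay (a proxy for skin deformation).
# All parameters below are assumptions for illustration only.

FS = 50_000              # microphone sample rate (Hz), assumed
F0, F1 = 17_000, 21_000  # near-inaudible sweep band (Hz), assumed
DUR = 0.012              # 12 ms frames, consistent with an ~83.3 Hz frame rate

def make_chirp(fs=FS, f0=F0, f1=F1, dur=DUR):
    """Generate a linear frequency sweep (a common acoustic-ranging probe)."""
    t = np.arange(int(fs * dur)) / fs
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t**2)
    return np.sin(phase)

def estimate_delay(echo, chirp):
    """Matched filter: the cross-correlation peak gives the echo delay in samples."""
    corr = np.correlate(echo, chirp, mode="full")
    return int(np.argmax(corr)) - (len(chirp) - 1)

if __name__ == "__main__":
    chirp = make_chirp()
    true_delay = 37  # samples (~0.74 ms round trip at 50 kHz), chosen for the demo
    echo = np.concatenate([np.zeros(true_delay), 0.3 * chirp])  # attenuated echo
    print(estimate_delay(echo, chirp))  # → 37
```

In practice a per-frame profile of such delays (one per speaker-microphone pair) would feed the machine-learning pipeline that regresses facial movements; that pipeline itself is not reproduced here.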