Revisiting the Deep Learning-Based Eavesdropping Attacks via Facial Dynamics from VR Motion Sensors.

ICICS (2023)

Abstract
Virtual Reality (VR) Head-Mounted Displays (HMDs) are equipped with a range of sensors, which have recently been exploited to infer users’ sensitive and private information through a deep learning-based eavesdropping attack that leverages facial dynamics. Mindful that the eavesdropping attack relies on facial dynamics, which vary across race and gender, we evaluate the robustness of such an attack under varying user characteristics. We base our evaluation on existing anthropological research showing statistically significant differences in face width, face length, and lip length among ethnic/racial groups, suggesting that a “challenger” whose features (ethnicity/race and gender) resemble a victim’s might deceive the eavesdropper more easily than one whose features differ. By replicating the classification model in [17] and examining its accuracy across six scenarios that vary the victim and attacker by ethnicity/race and gender, we show that our adversary can impersonate a user of the same ethnicity/race and gender more accurately: the average accuracy difference between the original and adversarial settings is the lowest among all scenarios. Conversely, an adversary with a different ethnicity/race and gender than the victim yields the highest average accuracy difference, highlighting, through impersonation, an inherent bias in the fundamentals of the approach.
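The evaluation described above compares classifier accuracy between the original (victim) setting and an adversarial (impersonation) setting for each demographic pairing. A minimal sketch of that per-scenario comparison, with entirely hypothetical scenario names, predictions, and labels (none of these numbers come from the paper):

```python
# Hypothetical sketch: per-scenario accuracy gap between the original
# (victim) and adversarial (impersonation) settings. All data below is
# illustrative toy data, not results from the paper.

def accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def accuracy_gap(original, adversarial):
    """Original-setting accuracy minus adversarial-setting accuracy.
    Each argument is a (predictions, labels) pair; a small gap means the
    impersonation closely reproduces the victim's facial dynamics."""
    return accuracy(*original) - accuracy(*adversarial)

# Toy scenarios: attacker shares ethnicity/race and gender with the victim
# ("same/same") vs. differs in both ("diff/diff").
scenarios = {
    "same/same": (([1, 0, 1, 1], [1, 0, 1, 1]),   # original: 4/4 correct
                  ([1, 0, 1, 0], [1, 0, 1, 1])),  # adversarial: 3/4 correct
    "diff/diff": (([1, 0, 1, 1], [1, 0, 1, 1]),   # original: 4/4 correct
                  ([0, 1, 1, 0], [1, 0, 1, 1])),  # adversarial: 1/4 correct
}

gaps = {name: accuracy_gap(o, a) for name, (o, a) in scenarios.items()}
# A smaller gap for "same/same" mirrors the bias described in the abstract.
print(gaps)  # → {'same/same': 0.25, 'diff/diff': 0.75}
```

The gap metric here simply mirrors the "average accuracy difference" the abstract reports; the paper's actual model, scenarios, and data are not reproduced.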
Keywords
eavesdropping attacks, facial dynamics, learning-based