UniAP: Protecting Speech Privacy With Non-Targeted Universal Adversarial Perturbations.

IEEE Trans. Dependable Secur. Comput. (2024)

Abstract
Ubiquitous microphones on smart devices considerably raise users’ concerns about speech privacy. Since the microphones are primarily controlled by hardware/software developers, profit-driven organizations can easily collect and analyze individuals’ daily conversations at scale with deep learning models, and users have no means to stop such privacy-violating behavior. In this article, we propose UniAP to empower users to protect their speech privacy from such large-scale analysis without affecting their routine voice activities. Based on our observations of the recognition model, we use adversarial learning to generate quasi-imperceptible perturbations that disturb speech signals captured by nearby microphones, obfuscating the recognition results of recordings into meaningless content. As validated in experiments, our perturbations protect user privacy regardless of what users say and when they say it. The stability of the jamming performance is further improved through training optimizations. Additionally, the perturbations are robust against noise-removal techniques. Extensive evaluations show that our perturbations achieve successful jamming rates of more than 87% in the digital domain, and of at least 90% and 70% in common and challenging settings, respectively, in a real-life chatting scenario. Moreover, our perturbations, trained solely on DeepSpeech, exhibit good transferability to other models based on similar architectures.
Keywords
Adversarial examples, speech recognition, privacy, voice assistants
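The core mechanism the abstract describes — a single non-targeted perturbation, shared across all inputs, constrained for quasi-imperceptibility, and learned adversarially so that recognition is derailed regardless of what is said — can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's method: a linear softmax classifier over synthetic 2-D features plays the role of the recognizer (the paper attacks DeepSpeech on real audio), and an L_inf ball plays the role of the imperceptibility constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the speech recognizer: a linear softmax
# "keyword" classifier over 2-D features, one template vector per class.
K = 3
angles = 2 * np.pi * np.arange(K) / K
C = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (K, 2) templates

# Synthetic "utterances": noisy samples around each class template.
n_per = 50
X = np.concatenate([c + 0.1 * rng.normal(size=(n_per, 2)) for c in C])
y = np.repeat(np.arange(K), n_per)

def predict(X):
    return (X @ C.T).argmax(axis=1)

def input_grad(X, y):
    # Mean gradient of the cross-entropy loss w.r.t. the inputs.
    logits = X @ C.T
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0       # p - one_hot(y)
    return (p @ C).sum(axis=0) / len(y)

# Non-targeted universal perturbation: a single delta shared by every
# input, ascended on the loss with sign-PGD and clipped to an L_inf ball
# of radius eps (a crude proxy for quasi-imperceptibility).
eps, lr, steps = 2.0, 0.1, 300
delta = 0.05 * rng.normal(size=2)        # small random init breaks symmetry
for _ in range(steps):
    g = input_grad(X + delta, y)
    delta = np.clip(delta + lr * np.sign(g), -eps, eps)

clean_acc = (predict(X) == y).mean()
adv_acc = (predict(X + delta) == y).mean()
```

Because `delta` is optimized over the whole batch rather than per sample, the same fixed perturbation degrades accuracy on every input — the toy analogue of a perturbation that jams recognition regardless of what users speak and when they speak.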