Towards Query-Efficient Adversarial Attacks Against Automatic Speech Recognition Systems

IEEE Transactions on Information Forensics and Security (2021)

Abstract
Adversarial attacks, which have attracted explosive research attention in recent years, have achieved remarkable success in fooling neural networks, especially on image-classification tasks. For automatic speech recognition (ASR) tasks, however, the state of the art mainly focuses on white-box attacks, where the adversary is assumed to have full access to the internals of the system, e.g., its network architecture, weights, etc. This assumption does not hold in practice, and constructing real-world adversarial examples against ASR systems remains a very challenging problem. In this paper, we present, for the first time, a novel and effective attack on ASR systems, named the Selective Gradient Estimation Attack (SGEA). Compared with the prior literature, SGEA needs only limited access to the output probabilities of the neural network, yet achieves extremely high efficiency and success rates. In our experiments, we attacked the DeepSpeech system on the Mozilla Common Voice and LibriSpeech datasets. The results demonstrate that SGEA improves the attack success rate from 35% to 98% while reducing the number of queries by 66%.
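The abstract does not detail how SGEA estimates gradients, but the standard black-box building block it names, gradient estimation from output probabilities alone, can be illustrated with a finite-difference (zeroth-order) sketch. The coordinate-selection rule below is hypothetical, included only to show how restricting queries to a subset of positions reduces query count; it is not the paper's actual selection criterion.

```python
import numpy as np

def estimate_gradient(loss, x, coords, delta=1e-3):
    """Zeroth-order gradient estimate using only loss-value queries.

    Queries the model twice per coordinate in `coords` (central
    differences); limiting `coords` to promising positions is the
    generic idea behind query-efficient "selective" estimation.
    """
    grad = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = delta
        # two black-box queries per selected coordinate
        grad[i] = (loss(x + e) - loss(x - e)) / (2 * delta)
    return grad

# Toy smooth loss f(x) = sum(x^2), whose true gradient is 2x,
# standing in for a query-only ASR loss over audio samples.
x = np.array([1.0, -2.0, 3.0, 0.5])
g = estimate_gradient(lambda v: float(np.sum(v ** 2)), x, coords=[0, 2])
```

Estimating only two of the four coordinates here costs 4 queries instead of 8; unselected coordinates simply keep a zero estimate, which is acceptable when their true gradient contribution is small.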
Keywords
Speech recognition, adversarial attack, neural network, gradient estimation