Improving the Security of Audio CAPTCHAs With Adversarial Examples

Ping Wang, Haichang Gao, Xiaoyan Guo, Zhongni Yuan, Jiawei Nian

IEEE Trans. Dependable Secur. Comput. (2024)

Abstract
CAPTCHAs (completely automated public Turing tests to tell computers and humans apart) have been the main protection against malicious attacks on public systems for many years. Audio CAPTCHAs, as one of the most important CAPTCHA forms, provide an effective test for visually impaired users. However, in recent years, most existing audio CAPTCHAs have been successfully attacked by machine learning-based audio recognition algorithms, showing their insecurity. In this article, a generative adversarial network (GAN)-based method is proposed to generate adversarial audio CAPTCHAs. The method uses a generator to synthesize noise, a discriminator to make the noise similar to the target, and a threshold function to limit the size of the perturbation; the synthetic perturbation is then combined with the original audio to produce the adversarial audio CAPTCHA. The experimental results demonstrate that the addition of adversarial examples can greatly reduce the recognition accuracy of automatic models and improve the robustness of different types of audio CAPTCHAs. We also explore ensemble learning strategies to improve the transferability of the proposed adversarial audio CAPTCHA methods. To investigate the effect of adversarial CAPTCHAs on human users, a user study is also conducted.
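The combination step the abstract describes — a synthesized perturbation bounded by a threshold function and added to the original audio — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the amplitude bound `epsilon`, and the assumption of float samples in [-1, 1] are all hypothetical.

```python
import numpy as np

def threshold(perturbation, epsilon=0.05):
    # Bound the synthesized noise so no sample exceeds epsilon in magnitude,
    # keeping the perturbation imperceptible to human listeners.
    # (epsilon is an illustrative value, not taken from the paper.)
    return np.clip(perturbation, -epsilon, epsilon)

def make_adversarial(audio, perturbation, epsilon=0.05):
    # Add the bounded perturbation to the original audio and re-clip
    # the result to the valid floating-point sample range [-1, 1].
    adversarial = audio + threshold(perturbation, epsilon)
    return np.clip(adversarial, -1.0, 1.0)

# Toy usage: a silent clip plus raw generator output stays within the bound.
audio = np.zeros(16000)                                # 1 s of silence at 16 kHz
raw_noise = np.random.uniform(-1.0, 1.0, size=16000)   # stand-in for GAN output
adv = make_adversarial(audio, raw_noise, epsilon=0.05)
```

In the paper's setting, `raw_noise` would come from the trained generator rather than a random draw, and the discriminator's feedback during training is what shapes that noise to fool recognition models.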
Keywords
Audio CAPTCHA, reCAPTCHA, adversarial examples, generative adversarial networks