Adversarial example generation with adaptive gradient search for single and ensemble deep neural network.

Information Sciences (2020)

Abstract
Deep Neural Networks (DNNs) have achieved remarkable success in domains such as computer vision, audio processing, and natural language processing. However, research indicates that deep neural networks face many security issues (e.g., adversarial attacks, information forgery). In the field of image classification, adversarial samples generated by specific adversarial attack strategies can easily fool deep neural classification models into making unreliable predictions. We find that such adversarial attack algorithms induce large-scale pixel modifications in the crafted images in order to maintain the effectiveness of the attack. These massive pixel modifications change the inherent characteristics of the generated examples and cause large image distortion. To address these issues, we introduce an adaptive gradient-based adversarial attack method named Adaptive Iteration Fast Gradient Method (AI-FGM), which seeks the input's preceding gradient and adaptively adjusts the accumulation of the perturbation when performing adversarial attacks. By maximizing a specific loss to generate adaptive gradient-based perturbations, AI-FGM applies several gradient-based operators to the clean input to map the crafted sample directly to the corresponding prediction. AI-FGM reduces unnecessary perturbation accumulation when crafting the adversary through its adaptive gradient-seeking strategy. Experimental results show that AI-FGM outperforms other gradient-based adversarial attackers against deep neural classification models, requiring fewer pixel modifications (AMP is 0.0017 under the L2 norm when fooling Inception-v3) and achieving a higher attack success rate under both white-box and black-box settings on public image datasets of different resolutions.
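The iterative gradient-based attack family the abstract builds on can be sketched as follows. This is a minimal illustrative example in NumPy against a simple logistic classifier, using a fixed per-step budget as a stand-in for AI-FGM's adaptive accumulation rule; the function and variable names are my own, not the paper's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def iterative_fgm_attack(x, y, w, b, eps=0.3, steps=10):
    """Iterative fast-gradient attack on a logistic classifier.

    Maximizes the cross-entropy loss of the true label y by repeatedly
    stepping along the sign of the input gradient, while clipping the
    total perturbation to an L-infinity ball of radius eps around x.
    """
    x_adv = x.copy()
    alpha = eps / steps  # fixed step size (AI-FGM would adapt this per iteration)
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)        # model prediction in (0, 1)
        grad = (p - y) * w                # d(cross-entropy loss)/d(x)
        x_adv = x_adv + alpha * np.sign(grad)      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay in the eps-ball
    return x_adv

# Toy demo: a point confidently classified as class 1 is pushed
# across the decision boundary by a bounded perturbation.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w @ x + b = 1.5, so p > 0.5 (class 1)
y = 1.0
x_adv = iterative_fgm_attack(x, y, w, b, eps=1.0, steps=10)
```

After the attack, the classifier's confidence in the true label drops below 0.5, i.e. the bounded perturbation flips the prediction, which is the basic mechanism all the gradient-based attackers compared in the paper share.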
Keywords
Deep neural networks,Adversarial attack,Adaptive gradient,Perturbation