Trust-No-Pixel: A Remarkably Simple Defense against Adversarial Attacks Based on Massive Inpainting

IEEE International Joint Conference on Neural Networks (IJCNN), 2022

Abstract
Deep learning systems have achieved significant breakthroughs in many fields, including computer vision and speech recognition, yet they are not inherently secure. Adversarial attacks on computer vision models craft slightly perturbed inputs that exploit the shape of the models' high-dimensional decision boundaries to dramatically reduce their performance without altering how human beings perceive the input. In this work, we present Trust-No-Pixel, a novel plug-and-play strategy to harden neural network image classifiers against adversarial attacks, based on a massive inpainting strategy. The inpainting technique of our defense performs a total erase of the input image and reconstructs it from scratch. Our experiments show that Trust-No-Pixel improves accuracy against the most challenging class of such attacks, namely white-box adversarial attacks. Moreover, an exhaustive comparison of our technique against state-of-the-art approaches from the academic literature confirms the solid defensive performance of Trust-No-Pixel under a wide variety of scenarios, including different attacks and attacked network architectures.
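To make the plug-and-play idea concrete, the sketch below illustrates one plausible reading of the abstract: every pixel of the classifier's input is erased and re-synthesized by an inpainting network before classification. This is a minimal illustration, not the authors' exact procedure; `inpaint_fn` is a hypothetical stand-in for a pretrained DeepFill-style generator, and the patch-wise erase-and-reconstruct schedule (`patch` size, raster order) is an assumption made for the example.

```python
import torch

def trust_no_pixel(image, inpaint_fn, classifier, patch=32):
    """Sketch of an inpainting-based preprocessing defense.

    image:      (C, H, W) tensor in [0, 1]
    inpaint_fn: hypothetical callable (image, mask) -> inpainted image,
                standing in for a pretrained DeepFill-style network
    classifier: the model being hardened (left unchanged: plug-and-play)
    """
    _, h, w = image.shape
    recon = image.clone()
    # Erase and re-synthesize the image one patch at a time, so that every
    # pixel the classifier sees comes from the inpainting network rather
    # than from the (possibly adversarially perturbed) original input.
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            mask = torch.zeros(1, h, w)
            mask[:, y:y + patch, x:x + patch] = 1.0  # 1 = region to fill
            recon = inpaint_fn(recon, mask)
    return classifier(recon.unsqueeze(0))
```

Because the defense operates purely on the input, it requires no retraining of the protected classifier, which is what makes it plug-and-play.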
Keywords
Adversarial attacks, inpainting, DeepFill