GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network

arXiv (2024)

Abstract
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, and detecting such samples is crucial for the wide application of DNN models. While existing methods exploit differences between clean and adversarial samples to expose these perturbations, most rely on a single model, rendering them vulnerable to adaptive attacks. To address this problem, we propose Graph-Guided Testing (GGT), a multiple-model-based detection algorithm that generates diverse models guided by graph characteristics. GGT identifies adversarial samples by their instability on the decision boundaries of the model pool. GGT is highly efficient: each generated model requires only about 5% of the floating-point operations of the original model. Our experiments demonstrate that GGT outperforms state-of-the-art methods against adaptive attacks. We release our code at https://github.com/implicitDeclaration/graph-guided-testing
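The abstract gives no implementation details, so the following is only a minimal sketch of the core detection idea: flag an input as adversarial when its predicted label is unstable across a pool of diverse model variants. The `models` pool, the `threshold` value, and the `detect_adversarial` function are hypothetical stand-ins for GGT's graph-guided variants, not the authors' actual code.

```python
import torch


def detect_adversarial(models, x, threshold=0.5):
    """Flag inputs whose predictions are unstable across a model pool.

    models: iterable of classifier variants (hypothetical stand-ins for
            GGT's graph-guided models); x: input batch of shape [N, ...].
    Returns a boolean mask of shape [N]: True = flagged as adversarial.
    """
    with torch.no_grad():
        # Each model's predicted label for every input: shape [M, N]
        preds = torch.stack([m(x).argmax(dim=1) for m in models])
    # Majority-vote label per input, then the fraction of models that
    # disagree with it; high disagreement signals decision-boundary
    # instability, the cue GGT uses to separate adversarial inputs.
    majority, _ = preds.mode(dim=0)
    disagreement = (preds != majority).float().mean(dim=0)
    return disagreement > threshold
```

A clean sample typically receives the same label from most variants, so its disagreement score stays near zero, while an adversarial perturbation crafted against one decision boundary tends to flip labels across the pool.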
Keywords
Adversarial Sample Detection, Graph Structure, Adversarial Attack, Model Pruning