GradFuzz: Fuzzing deep neural networks with gradient vector coverage for adversarial examples

Neurocomputing (2023)

Cited by 2 | Views 8
Abstract
Deep neural networks (DNNs) are susceptible to adversarial attacks that add perturbations to input data, causing misclassification errors and making machine-learning systems fail. As a defense, adversarial training leverages possible crashing inputs, i.e., adversarial examples; however, the input space of DNNs is enormous and high-dimensional, making such inputs difficult to find across a wide range. Coverage-guided fuzzing is promising in this respect, but it raises the question of which coverage metrics are appropriate for DNNs. We observed that existing coverage metrics have limited ability: they lack gradual guidance toward crashes because they simply search for a wide neuron-activation area, and none of the existing approaches simultaneously achieves high crash quantity, high crash diversity, and efficient fuzzing time. In addition, the evaluation methodologies adopted by state-of-the-art fuzzers require rigorous improvement. To address these problems, we present a new DNN fuzzer named GradFuzz. Our key idea is gradient vector coverage, which provides gradual guidance toward misclassified categories. We implemented our system and performed experiments under rigorous evaluation methodologies. The results indicate that GradFuzz outperforms state-of-the-art DNN fuzzers: it locates a more diverse set of errors, which benefits adversarial training, on the MNIST and CIFAR-10 datasets without sacrificing either crash quantity or fuzzing efficiency.
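To make the notion of gradient vector coverage concrete, the following is a minimal sketch, not the authors' implementation: it assumes a PyTorch classifier, takes the gradient of the loss with respect to a chosen intermediate layer's activations, and coarsely buckets that gradient vector so that inputs producing previously unseen gradient patterns count as new coverage. The layer choice, bucketing scheme, and class names here are illustrative assumptions.

```python
# Hypothetical sketch of gradient-vector coverage for a coverage-guided DNN fuzzer.
# Assumes a PyTorch model; the bucketing scheme is a simplification for illustration.
import torch
import torch.nn.functional as F

class GradientVectorCoverage:
    def __init__(self, model, layer, num_buckets=10):
        self.model = model
        self.num_buckets = num_buckets
        self.seen = set()            # coverage map: observed bucketed gradient patterns
        self._acts = None
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inp, out):
        out.retain_grad()            # keep the gradient for this intermediate activation
        self._acts = out

    def update(self, x, label):
        """Return True if the input exercises new gradient-vector coverage."""
        self.model.zero_grad()
        logits = self.model(x.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([label]))
        loss.backward()

        grad = self._acts.grad.flatten()
        # Normalize and coarsely bucket each gradient component so that
        # inputs with similar gradient directions map to the same coverage key.
        grad = grad / (grad.abs().max() + 1e-12)
        key = tuple(((grad + 1) / 2 * self.num_buckets)
                    .long().clamp_max(self.num_buckets - 1).tolist())

        if key not in self.seen:
            self.seen.add(key)
            return True              # new coverage: keep this input as a fuzzing seed
        return False
```

In a fuzzing loop, a mutated input whose `update` call returns True would be retained in the seed corpus; the intuition from the abstract is that gradient vectors change gradually as inputs move toward misclassified categories, giving the fuzzer finer-grained guidance than activation-area metrics.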
Keywords
Deep learning security,Coverage-guided DNN fuzzing,Gradient vector coverage