Generating adversarial examples for DNN using pooling layers.

Journal of Intelligent & Fuzzy Systems (2019)

Abstract
Deep Neural Networks are an application of Big Data, and robustness is one of Big Data's most important issues. This paper proposes a new approach, named PCD, for computing adversarial examples for Deep Neural Networks (DNNs) and thereby increasing the robustness of Big Data. In safety-critical applications, adversarial examples are a major threat to the reliability of DNNs. PCD generates adversarial examples by producing different coverage of pooling functions using gradient ascent. Among 2707 input images, PCD generates 672 adversarial examples with L-infinity distances less than 0.3. Compared to PGD (a state-of-the-art tool for generating adversarial examples with distances less than 0.3), PCD finds 1.5 times as many adversarial examples as PGD does (449).
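The abstract does not give PCD's algorithm, but it describes the general mechanism both tools share: gradient ascent on a loss with the perturbation projected into an L-infinity ball of radius 0.3. The sketch below is not the paper's PCD; it is a minimal PGD-style attack on a hypothetical toy model (a single linear layer with a sigmoid output, weights `W` assumed for illustration), showing the ascend-then-project loop and the L-infinity constraint.

```python
import numpy as np

# Hypothetical toy setup: a fixed linear "model" stands in for a DNN.
# This is NOT the paper's PCD tool; it only illustrates the PGD-style
# gradient ascent with an L-infinity bound (epsilon = 0.3) that the
# abstract uses as its distance criterion.
rng = np.random.default_rng(0)
W = rng.normal(size=10)   # assumed toy model weights
x = rng.normal(size=10)   # clean input

def loss(z):
    # Cross-entropy-style loss for true label 1 under a sigmoid output.
    p = 1.0 / (1.0 + np.exp(-W @ z))
    return -np.log(p + 1e-12)

def grad_loss(z):
    # Analytic input gradient of the loss above.
    p = 1.0 / (1.0 + np.exp(-W @ z))
    return -(1.0 - p) * W

def pgd_attack(x, epsilon=0.3, step=0.05, iters=20):
    """Projected gradient ascent: maximize the loss while keeping the
    perturbation inside the L-infinity ball of radius epsilon."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_loss(x_adv))  # ascend on the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to the ball
    return x_adv

x_adv = pgd_attack(x)
print(np.max(np.abs(x_adv - x)))  # L-infinity distance, bounded by 0.3
```

PCD differs from this baseline in its objective: per the abstract, it ascends toward inputs that exercise different coverage of the pooling functions, rather than directly maximizing the classification loss.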
Keywords
Deep neural network, robustness, coverage, big data