NetSat: Network Saturation Adversarial Attack.

2023 IEEE International Conference on Big Data (BigData)

Abstract
Deep Convolutional Neural Networks can be fooled with imperceptible image perturbations, which undermines the security of intelligent systems. This is especially important for applications in which security is crucial, such as Automated Guided Vehicles (AGVs) and production systems. The great majority of adversarial attacks target the classification layer of the network and focus only on changing the prediction, but it is also beneficial to corrupt the deep data representations: this causes greater confusion in the network, which matters for more in-depth security testing. Classifier-independent attacks could in the future be used to test other types of networks frequently deployed in industry, production and AGVs, e.g. object detectors. In this paper, we present the Network Saturation Attack (NetSat), a novel classifier-independent, white-box, non-targeted adversarial attack. It aims to ‘saturate’ the final convolutional feature maps with introduced patterns and thus make the network unable to provide reasonable predictions. Classifier-independence yields samples whose predictions lie further from the true label and makes the method highly flexible. To assess the attack's harmfulness, we propose the Dissimilarity Metric (DM), which describes how far the predicted label is from the true one in the class space. Owing to their simplicity, flexibility and practicality, the proposed methods are suitable for testing real-life systems. We release a GitHub repository with code and examples: https://github.com/iitis/NetSat-NetworkSaturationAttack.
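The saturation idea can be illustrated with a short, hedged sketch. The following is not the authors' implementation (that is in the linked repository) but a minimal PGD-style approximation of it, assuming a ResNet-50 backbone, an L∞ perturbation budget, and the mean absolute activation of the final convolutional feature maps as the objective:

```python
import torch
import torchvision.models as models

# Minimal sketch of a saturation-style, classifier-independent attack.
# The backbone, loss, step size and truncation point are assumptions for
# illustration; see the paper's repository for the actual NetSat code.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Drop the average-pooling and fully connected head so the objective
# only ever sees the final convolutional feature maps.
features = torch.nn.Sequential(*list(model.children())[:-2])

def saturation_attack(x, eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style ascent that drives the final feature maps toward
    saturation by maximizing their mean activation magnitude."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        fmaps = features(x + delta)              # final conv feature maps
        loss = fmaps.abs().mean()                # 'saturate' the activations
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # signed gradient step
            delta.clamp_(-eps, eps)              # keep perturbation small
            delta.grad = None
    return (x + delta).clamp(0, 1).detach()      # keep a valid image range
```

A class-space distance in the spirit of DM could likewise be instantiated with WordNet, the hierarchy underlying the ImageNet labels. The paper defines the exact DM formula, so the snippet below is only an assumed stand-in using WordNet path distance:

```python
from nltk.corpus import wordnet as wn  # requires nltk and the 'wordnet' corpus

# Hypothetical class-space distance; the paper's exact DM definition is
# not reproduced here. WordNet path distance between label synsets is
# one plausible way to quantify how far a prediction is from the label.
def class_distance(true_synset, pred_synset):
    sim = wn.synset(true_synset).path_similarity(wn.synset(pred_synset))
    return 1.0 - sim if sim is not None else 1.0

print(class_distance('dog.n.01', 'cat.n.01'))       # smaller: related animals
print(class_distance('dog.n.01', 'airplane.n.01'))  # larger: unrelated classes
```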
Keywords
Security and privacy of AI-based systems, Adversarial Attacks, Convolutional Neural Networks