Adversarial Examples for Deep Learning Cyber Security Analytics

arXiv (2019)

Abstract
As advances in Deep Neural Networks demonstrate unprecedented levels of performance in many critical applications, their vulnerability to attacks remains an open question. Adversarial examples are small modifications of legitimate data points that result in misclassification at testing time. As Deep Neural Networks have found a wide range of applications in cyber security analytics, it becomes important to study the robustness of these models in this setting. We consider adversarial testing-time attacks against Deep Learning models designed for cyber security applications. In security applications, machine learning models are typically not trained directly on raw network traffic or security logs, but on intermediate features defined by domain experts. Existing attacks applied directly to this intermediate feature representation violate feature constraints, producing invalid adversarial examples. We propose a general framework for crafting adversarial attacks that takes into consideration the mathematical dependencies between intermediate features in the model input vector, as well as physical constraints imposed by the applications. We apply our methods to two security applications, a malicious connection classifier and a malicious domain classifier, to generate feasible adversarial examples in these domains. We show that with minimal effort (e.g., generating 12 network connections), an attacker can change a model's prediction from Malicious to Benign. We extensively evaluate the success of our attacks and how it depends on several optimization objectives and on imbalance ratios in the training data.
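The core idea, perturbing the input while preserving mathematical dependencies between intermediate features and physical domain constraints, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch projection-based evasion attack, not the authors' exact framework: the model interface, the feature layout (packets sent, packets received, and their dependent total), and the projection rules are illustrative assumptions.

import torch

def project_feasible(x: torch.Tensor) -> torch.Tensor:
    # Illustrative constraints: features 0 and 1 are non-negative integer
    # packet counts; feature 2 is a dependent feature equal to their sum.
    x = x.clamp(min=0.0)
    x[..., 0] = x[..., 0].round()
    x[..., 1] = x[..., 1].round()
    x[..., 2] = x[..., 0] + x[..., 1]
    return x

def constrained_evasion(model, x0, step=1.0, iters=100):
    # Gradient-sign descent on the malicious score, projecting each
    # iterate back onto the feasible feature set after every step.
    x = x0.clone()
    for _ in range(iters):
        x_var = x.detach().requires_grad_(True)
        score = model(x_var)               # assumed scalar output: P(malicious)
        score.backward()
        x = project_feasible((x_var - step * x_var.grad.sign()).detach())
        with torch.no_grad():
            if model(x).item() < 0.5:      # prediction flipped to Benign
                return x
    return x

The projection step is what distinguishes this constrained setting from standard feature-space attacks: without it, the gradient step would typically yield fractional packet counts or a total inconsistent with its components, i.e., an adversarial example no real network connection could produce.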
Keywords
Adversarial machine learning, evasion attacks, feed-forward neural networks, constrained environment, network traffic botnet classification, domain classification