Towards adversarial realism and robust learning for IoT intrusion detection and classification

arXiv (2023)

Abstract
The internet of things (IoT) faces tremendous security challenges. Machine learning models can be used to tackle the growing number of cyber-attack variations targeting IoT systems, but the increasing threat posed by adversarial attacks reinforces the need for reliable defense strategies. This work describes the types of constraints required for a realistic adversarial cyber-attack example and proposes a methodology for a trustworthy adversarial robustness analysis with a realistic adversarial evasion attack vector. The proposed methodology was used to evaluate three supervised algorithms, random forest (RF), extreme gradient boosting (XGB), and light gradient boosting machine (LGBM), and one unsupervised algorithm, isolation forest (IFOR). Constrained adversarial examples were generated with the adaptative perturbation pattern method (A2PM), and evasion attacks were performed against models created with regular and adversarial training. Even though RF was the least affected in binary classification, XGB consistently achieved the highest accuracy in multi-class classification. The obtained results highlight the inherent susceptibility of tree-based algorithms and ensembles to adversarial evasion attacks, and demonstrate the benefits of adversarial training and a security-by-design approach for more robust IoT network intrusion detection and cyber-attack classification.
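The evaluation protocol summarized above (train a model normally, attack it with perturbed examples, then retrain on data augmented with perturbed copies and attack again) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a naive Gaussian perturbation stands in for A2PM's constrained adversarial examples, and synthetic data stands in for the IoT network traffic; only RF is shown.

```python
# Hypothetical sketch of the regular-vs-adversarial-training comparison.
# The perturb() helper is a stand-in for constrained adversarial example
# generation (the paper uses A2PM); data here is synthetic, not IoT traffic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def perturb(X, eps=0.5):
    # Naive evasion stand-in: additive noise instead of A2PM's
    # domain-constrained perturbation patterns.
    return X + rng.normal(scale=eps, size=X.shape)

# Regular training: fit on clean data, evaluate on clean and perturbed test sets.
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc_clean = rf.score(X_te, y_te)
acc_evasion = rf.score(perturb(X_te), y_te)

# Adversarial training: augment the training set with perturbed copies,
# then evaluate the retrained model under the same evasion attack.
X_aug = np.vstack([X_tr, perturb(X_tr)])
y_aug = np.concatenate([y_tr, y_tr])
rf_adv = RandomForestClassifier(random_state=0).fit(X_aug, y_aug)
acc_robust = rf_adv.score(perturb(X_te), y_te)
```

Comparing `acc_evasion` with `acc_robust` mirrors the paper's robustness analysis: the gap between them indicates how much the augmented training recovers of the accuracy lost to the evasion attack.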
Keywords
Adversarial attacks, Adversarial robustness, Machine learning, Tabular data, Internet of things, Intrusion detection