Explainable machine learning-based cybersecurity detection using LIME and Secml

Sawsan Alodibat,Ashraf Ahmad,Mohammad Azzeh

2023 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT)(2023)

Abstract
The field of Explainable Artificial Intelligence (XAI) has gained significant momentum in recent years. This discipline focuses on developing novel approaches to explain and interpret how machine learning algorithms work. As machine learning techniques increasingly rely on "black box" methods, it has become harder to understand how these algorithms reach their decisions, which in turn makes them difficult to deploy in sensitive and critical fields. Research in machine learning interpretability has therefore become crucial. One area that particularly requires attention is the detection and classification of malware: handling and preparing malware data poses significant difficulties for machine learning algorithms, so explainability is a critical requirement in current research. Our research applies XAI to cybersecurity data to gain knowledge about the composition of malware, using the large Microsoft Malware Classification Challenge (BIG 2015) benchmark dataset. We use the LIME explainability technique and the SecML Python library to produce explainable predictions of the malware class, achieving 94% accuracy with a Decision Tree classifier.
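The pipeline the abstract describes, a Decision Tree classifier whose individual predictions are explained locally in the style of LIME, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: it uses scikit-learn and a hand-rolled LIME-style local surrogate (perturb the instance, weight perturbations by proximity, fit a weighted linear model) rather than the BIG 2015 dataset or the `lime`/SecML libraries themselves.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for malware feature vectors (binary: benign vs. malicious).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# Instance whose prediction we want to explain.
x0 = X[0]

# 1. Perturb the instance with Gaussian noise to probe the model locally.
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))

# 2. Query the black-box model on the perturbed samples.
probs = clf.predict_proba(Z)[:, 1]

# 3. Weight each perturbation by its proximity to x0 (exponential kernel).
dist = np.linalg.norm(Z - x0, axis=1)
weights = np.exp(-(dist ** 2) / (2 * 0.75 ** 2))

# 4. Fit a weighted linear surrogate; its coefficients are the local
#    feature importances LIME would report for this one prediction.
surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
importances = surrogate.coef_
print(importances.shape)  # one importance score per feature
```

In the actual study, the perturbation and surrogate-fitting steps are handled by LIME's tabular explainer, and SecML provides the adversarial/security-oriented tooling around the classifier; the sketch above only shows the core local-explanation idea.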
Keywords
Explainability, Machine learning, Microsoft malware dataset, XAI, LIME, SecML, cybersecurity