Explainable Artificial Intelligence for Smart Grid Intrusion Detection Systems

IT Professional (2022)

Abstract
A popular approach to overcoming the complexity of cybersecurity and the sophistication of cyber attacks is to implement artificial intelligence (AI)-based security controls that integrate machine learning (ML) algorithms into functions such as intrusion and malware detection. These AI-based security controls are considered more effective than traditional signature-based and heuristics-based controls. However, the growing adoption of advanced ML algorithms is turning these AI-based security controls into black-box systems. We postulate that such black-box AI methods make risk management and informed decision-making challenging. Using smart grid intrusion detection as our context, we illustrate our arguments by outlining a risk assessment plan that addresses the transparency and interpretability of an AI-based security control. We contribute to the literature by shifting the focus from the performance of algorithms to their explainability, highlighting critical steps for integrating explainability into risk assessment planning, and outlining the implications of explainability for AI-based intrusion detection systems.
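As a minimal illustration of the transparency the abstract argues for, the sketch below contrasts a black-box verdict with an explainable one: an interpretable linear scorer that returns not only an alert decision but also each feature's contribution to it, so an analyst can audit why an event was flagged. The feature names, weights, and threshold are invented for illustration and do not come from the paper.

```python
# Hypothetical white-box intrusion scorer. Unlike a black-box model that
# emits only a verdict, it also returns per-feature contributions, which
# is the kind of interpretability a risk assessment plan could require.

WEIGHTS = {                      # illustrative weights, not from the paper
    "failed_logins": 0.5,        # normalized count of failed logins
    "packet_rate_zscore": 0.3,   # deviation of traffic rate from baseline
    "off_hours_access": 0.2,     # 1.0 if access occurred outside work hours
}
THRESHOLD = 0.6                  # illustrative alert threshold

def score_event(features):
    """Return (is_alert, total_score, per-feature contributions)."""
    contributions = {
        name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS
    }
    total = sum(contributions.values())
    return total >= THRESHOLD, total, contributions

alert, total, why = score_event(
    {"failed_logins": 1.0, "packet_rate_zscore": 0.8, "off_hours_access": 0.0}
)
print("alert:", alert)
print("contributions:", why)  # the "explanation" accompanying the verdict
```

Because the score is an additive sum, the contributions decompose the decision exactly; for more complex models, post-hoc explanation methods aim to produce analogous per-feature attributions.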
Keywords
Industries, Privacy, Machine learning algorithms, Closed box, Intrusion detection, Control systems, Turning