Automated Extraction of Security Profile Information from XAI Outcomes.

Sharmin Jahan, Sarra M. Alqahtani, Rose F. Gamble, Masrufa Bayesh

2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), 2023

Abstract
Security applications use machine learning (ML) models and artificial intelligence (AI) to autonomously protect systems. However, security decisions are more impactful when they are coupled with their rationale. The explanation behind an ML model's result provides the rationale necessary for a security decision. Explainable AI (XAI) techniques provide insights into the state of a model's attributes and their contribution to the model's results to gain the end user's confidence. Investigating and interpreting the explanation requires human intervention, and the interpretation must align with the system's security profile(s). A security profile is an abstraction of the system's security requirements and the related functionalities needed to comply with them. Relying on human intervention for interpretation is infeasible for an autonomous system (AS), since it must self-adapt its functionalities in response to uncertainty at runtime. Thus, an AS requires an automated approach to extract security profile information from ML model XAI outcomes. The challenge is unifying the XAI outcomes with the security profile to represent the interpretation in a structured form. This paper presents a component that enables an AS to extract information from the XAI outcomes associated with an ML model's predictions and to generate an interpretation that reflects the security profile.
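The abstract does not specify the extraction mechanism, so the sketch below is only a rough illustration of the general idea it describes: per-feature attribution scores produced by an XAI technique (e.g., SHAP-style contribution values) are mapped onto security profile entries to yield a structured interpretation. All names here (ProfileRequirement, extract_profile_interpretation, the example features, requirement IDs, and the relevance threshold) are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ProfileRequirement:
    """Hypothetical security profile entry: a requirement plus the ML
    features assumed to relate to it (names are illustrative)."""
    requirement_id: str
    description: str
    related_features: List[str]


@dataclass
class Interpretation:
    """Structured interpretation for one requirement affected by a prediction."""
    requirement_id: str
    supporting_features: Dict[str, float]  # feature -> attribution score
    total_contribution: float


def extract_profile_interpretation(
    attributions: Dict[str, float],        # XAI output: feature -> contribution
    profile: List[ProfileRequirement],
    min_contribution: float = 0.05,        # assumed relevance threshold
) -> List[Interpretation]:
    """Map per-feature XAI attributions onto security profile requirements."""
    results: List[Interpretation] = []
    for req in profile:
        relevant: Dict[str, float] = {}
        for feature in req.related_features:
            score = attributions.get(feature, 0.0)
            if abs(score) >= min_contribution:
                relevant[feature] = score
        if relevant:
            results.append(Interpretation(
                requirement_id=req.requirement_id,
                supporting_features=relevant,
                total_contribution=sum(relevant.values()),
            ))
    # Rank requirements by how strongly the explanation touches them.
    return sorted(results, key=lambda r: abs(r.total_contribution), reverse=True)


if __name__ == "__main__":
    # Toy attributions as an XAI technique might produce for one prediction.
    attributions = {"failed_logins": 0.42, "packet_rate": 0.10, "cpu_load": 0.01}
    profile = [
        ProfileRequirement("REQ-AUTH-1", "Limit repeated authentication failures",
                           ["failed_logins"]),
        ProfileRequirement("REQ-NET-3", "Detect abnormal traffic volume",
                           ["packet_rate", "cpu_load"]),
    ]
    for item in extract_profile_interpretation(attributions, profile):
        print(item.requirement_id, item.total_contribution, item.supporting_features)
```

The structured output (requirement ID, supporting features, aggregate contribution) stands in for the kind of machine-readable interpretation an AS could act on without human review; the actual representation used in the paper may differ.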
Keywords
Security profile, deep learning, explainable AI, self-adaption, autonomous systems