An Explainable Intelligence Model for Security Event Analysis.

Australasian Conference on Artificial Intelligence (2019)

Abstract
A huge volume of events is logged by monitoring systems. Analysts do not audit or trace the log files, which record the most significant events, until an incident occurs. Human analysis is a tedious and inaccurate task given the vast volume of log files that are stored in a "machine-friendly" format. Analysts have to derive the context of an incident using prior knowledge to find the events relevant to the incident and recognize why it has happened. Although security tools have been developed to ease the analysis process by providing visualization techniques and minimizing human interaction, far too little attention has been paid to interpreting security incidents in a "human-friendly" format. In addition, current detection patterns and rules are not mature enough to recognize early breaches that have not yet caused any damage. In this paper, we present an Explainable AI model that assists analysts' judgement in inferring what has happened from security event logs. The proposed Explainable AI model includes storytelling as a novel knowledge representation model to present sequences of events that are automatically discovered from the log file. To discover sequential events automatically, an Apriori-like algorithm that mines temporal patterns is utilized. This effort focuses on security events to convey both short-lived and long-lived activities. The experimental results demonstrate the potential and advantages of the proposed Explainable AI model on security logs, validated on activities recorded during security configuration compliance checking on a Windows system.
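The abstract names an Apriori-like algorithm for mining temporal patterns but gives no implementation details, so the following is only a minimal illustrative sketch under assumptions: security events are grouped into fixed time windows and frequent event sets are grown level by level in classic Apriori fashion. The window size, event IDs, and helper names are hypothetical, and the periodicity analysis suggested by the "Periodic frequent item set" keyword is omitted.

```python
# Minimal sketch (not the authors' implementation) of Apriori-style mining
# over windowed security-event logs.
from collections import Counter

def window_events(log, window_seconds=300):
    """Group (timestamp, event_id) pairs into fixed time windows (assumed preprocessing)."""
    windows = {}
    for ts, event_id in log:
        windows.setdefault(int(ts // window_seconds), set()).add(event_id)
    return list(windows.values())

def apriori(windows, min_support=2, max_size=3):
    """Return event sets that occur in at least `min_support` windows."""
    frequent = {}
    # Level 1: frequent single events.
    counts = Counter(e for w in windows for e in w)
    current = {frozenset([e]) for e, c in counts.items() if c >= min_support}
    frequent.update({s: counts[next(iter(s))] for s in current})
    size = 2
    while current and size <= max_size:
        # Candidate generation: join frequent sets from the previous level.
        candidates = {a | b for a in current for b in current if len(a | b) == size}
        counts = Counter()
        for w in windows:
            for cand in candidates:
                if cand <= w:
                    counts[cand] += 1
        current = {c for c, n in counts.items() if n >= min_support}
        frequent.update({c: counts[c] for c in current})
        size += 1
    return frequent

# Hypothetical usage: (timestamp, Windows security event ID) pairs from a log.
log = [(0, 4624), (10, 4672), (20, 4688), (310, 4624), (320, 4672), (330, 4688)]
print(apriori(window_events(log)))
```

In this toy run, the logon/privilege/process events (4624, 4672, 4688) co-occur in both windows, so the miner reports them as a frequent set, which is the kind of recurring sequence the storytelling layer would then present to the analyst.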
Keywords
Security events, Storytelling, Periodic frequent item set