Nebula: Self-Attention for Dynamic Malware Analysis

CoRR (2023)

Abstract
Dynamic analysis enables detecting Windows malware by executing programs in a controlled environment and storing their actions in log reports. Previous work has started training machine learning models on such reports to perform either malware detection or malware classification. However, most approaches (i) have only considered convolutional and long short-term memory networks, (ii) have been built focusing solely on APIs called at runtime, without considering other relevant though heterogeneous sources of information such as network and file operations, and (iii) rarely make their code and pretrained models available, hindering reproducibility of results in this research area. In this work, we overcome these limitations by presenting Nebula, a versatile, self-attention transformer-based neural architecture that generalizes across different behavior representations and formats, combining heterogeneous information from dynamic log reports. We show the efficacy of Nebula on three distinct data collections from different dynamic analysis platforms, comparing its performance with previous state-of-the-art models developed for malware detection and classification tasks. We provide an extensive ablation study that showcases how the components of Nebula influence its predictive performance, while enabling it to outperform some competing approaches at very low false positive rates. We conclude by inspecting the behavior of Nebula through explainability methods, which highlight that Nebula correctly focuses on the portions of reports that contain malicious activities. We release our code and models at github.com/dtrizna/nebula.
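To make the described setup concrete, below is a minimal sketch of a self-attention classifier over tokenized dynamic-analysis reports, in the spirit of the abstract. It is not the authors' released code: the class name ReportTransformer, the tokenizer assumptions (integer token IDs with 0 as padding), and all hyperparameters are illustrative choices for this sketch only.

```python
# Minimal sketch (not the Nebula release): a transformer encoder that
# classifies a tokenized behavior report as benign or malicious.
# Vocabulary size, sequence length, and layer sizes are assumptions.
import torch
import torch.nn as nn


class ReportTransformer(nn.Module):
    def __init__(self, vocab_size=50_000, embed_dim=64, num_heads=8,
                 num_layers=2, max_len=512, num_classes=2):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer IDs produced by a report tokenizer;
        # heterogeneous fields (APIs, network, file ops) would be flattened
        # into this single token stream before reaching the model.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.token_emb(token_ids) + self.pos_emb(positions)
        # Padding tokens (ID 0) are masked out of the self-attention.
        x = self.encoder(x, src_key_padding_mask=(token_ids == 0))
        # Mean-pool non-padding tokens, then classify.
        mask = (token_ids != 0).unsqueeze(-1).float()
        pooled = (x * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        return self.classifier(pooled)


# Usage: a batch of two reports, padded/truncated to 512 tokens.
model = ReportTransformer()
dummy_ids = torch.randint(1, 50_000, (2, 512))
logits = model(dummy_ids)  # shape: (2, 2) -> benign vs. malicious scores
```

The attention weights inside the encoder are also what an explainability analysis, like the one described in the abstract, would typically inspect to see which report tokens the model attends to.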
Keywords
dynamic malware analysis, self-attention