Interpretable Visualizations of Deep Neural Networks for Domain Generation Algorithm Detection

2020 IEEE Symposium on Visualization for Cyber Security (VizSec), 2020

Cited by 8 | Viewed 12
Abstract
Due to their success in many application areas, deep learning models have been widely adopted. However, their black-box nature makes it hard to trust their decisions and to evaluate their line of reasoning. In the field of cybersecurity, this lack of trust and understanding poses a significant challenge to the utilization of deep learning models. Thus, we present a visual analytics system that provides designers of deep learning models for the classification of domain generation algorithms with understandable interpretations of their model. We cluster the activations of the model's nodes and leverage decision trees to explain these clusters. In combination with a 2D projection, the user can explore how the model views the data at different layers. In a preliminary evaluation of our system, we show how it can be employed to better understand misclassifications, identify potential biases, and reason about the roles that different layers in a model may play.
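The pipeline outlined in the abstract (cluster a layer's activations, explain the clusters with a decision tree, project to 2D for exploration) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of k-means for clustering, PCA for the 2D projection, and the random stand-in for layer activations are all assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical stand-in for one layer's activations:
# 200 input samples x 16 nodes in the layer.
activations = rng.normal(size=(200, 16))

# Step 1: cluster the activation vectors (k-means assumed here).
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(activations)

# Step 2: fit a shallow decision tree that explains cluster membership
# in terms of the individual node activations, yielding readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(activations, clusters)
rules = export_text(tree)  # human-readable split rules per cluster

# Step 3: 2D projection of the same activations for visual exploration
# (PCA assumed; the paper's system could use any projection method).
coords = PCA(n_components=2).fit_transform(activations)
```

Inspecting `rules` shows which nodes' activation thresholds separate the clusters, which is the kind of interpretable summary the system pairs with the projection view.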
Keywords
Explainable artificial intelligence (XAI), visual analytics, model visualization, DGA detection, cybersecurity