A Semantic Interpretation Method for Deep Neural Networks Based on Knowledge Graphs

Liu Jingjing, Xu Song, Wang Lina

2022 China Automation Congress (CAC)

Abstract
Despite the great success of deep neural networks in many fields, their lack of interpretability has severely limited their application in security-sensitive tasks. Although existing interpretation methods for deep neural networks, such as visualization, class activation mapping, and sensitivity analysis, can help users intuitively understand a network's inner working mechanism to some extent, their explanations are either too coarse or too complex in form to read easily. To interpret deep neural networks with semantic information that is more understandable and closer to human thought, and to increase the readability of the interpretation, we propose a semantic interpretation method for deep neural networks based on knowledge graphs. Taking the VGG16 network as an example, the method mines the key neurons of the network, constructs a semantic dictionary and a knowledge graph of the key neurons, and automatically generates human-understandable semantic explanatory statements from the knowledge graph. The method offers a new way to improve the transparency of the operation of deep neural networks, and also provides a clearer reference basis for pruning and tuning them.
Keywords
deep neural networks,interpretability,neurons,comprehensibility
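The pipeline the abstract describes (mine key neurons, attach semantic labels via a dictionary, generate an explanatory sentence) could be sketched roughly as follows. The activation data, the neuron-to-concept dictionary, and the selection criterion (top mean activation) are illustrative assumptions for this sketch, not the paper's actual method or values.

```python
import numpy as np

def mine_key_neurons(activations, top_k=3):
    """Select the neurons with the largest mean activation over a set of
    inputs -- one simple notion of 'key neuron'; the paper's criterion
    may differ."""
    mean_act = activations.mean(axis=0)           # average over inputs
    return np.argsort(mean_act)[::-1][:top_k]     # indices of top-k neurons

def explain(key_neurons, semantic_dict):
    """Generate a human-readable sentence from a neuron -> concept map
    (a stand-in for the knowledge-graph lookup)."""
    concepts = [semantic_dict.get(int(n), "unknown concept")
                for n in key_neurons]
    return ("The prediction relies mainly on neurons detecting: "
            + ", ".join(concepts))

# Toy data: 5 inputs x 8 neurons of a hypothetical VGG16 layer.
rng = np.random.default_rng(0)
acts = rng.random((5, 8))
acts[:, 2] += 2.0   # make neuron 2 clearly dominant
acts[:, 5] += 1.0   # neuron 5 second

# Hypothetical semantic dictionary (neuron index -> concept label).
sem = {2: "striped texture", 5: "ear shape", 7: "fur color"}

keys = mine_key_neurons(acts, top_k=2)
print(explain(keys, sem))
```

A knowledge-graph edition of this sketch would replace the flat dictionary with typed relations (neuron → concept → class), so the generated sentence can also state *why* a concept matters for the predicted class.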