Pay Attention via Binarization: Enhancing Explainability of Neural Networks via Binarization of Activation

Yuma Tashiro, Hiromitsu Awano

ISCAS (2022)

Cited by 1
Abstract
Modern deep learning algorithms consist of highly complex artificial neural networks, making it extremely difficult for humans to trace the inference process. As deep learning is deployed in society, the human and economic losses caused by inference errors are becoming increasingly problematic, and methods are needed to explain the basis for the decisions of deep learning algorithms. For automated driving, a method that uses an attention mechanism to visualize the regions contributing to steering-angle prediction has been proposed, but its explanatory capability remains low. In this paper, we focus on the difference in the importance of each bit in the activation (the LSBs carry the lowest weight while the MSBs carry the highest) and propose a method that applies attention only to the sign bits to further enhance the explanation. A numerical experiment on the Udacity dataset shows that the proposed method achieves a 33% higher area under the curve (AUC) in terms of the deletion metric.
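The core idea of modulating only the sign bit of an activation with an attention map can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (the function name `sign_bit_attention` and the exact way attention is combined with the sign bit are hypothetical, not the authors' implementation):

```python
import numpy as np

def sign_bit_attention(x, attn):
    # Binarize the activation to its sign bit (+1/-1), i.e. keep only
    # the most significant (sign) information, then scale that sign
    # by the attention map. Assumption: attention multiplies the sign
    # bit directly; the paper's actual formulation may differ.
    sign = np.where(x >= 0.0, 1.0, -1.0)
    return attn * sign

# Toy activations and a toy spatial attention map
x = np.array([[0.7, -1.2],
              [-0.3, 2.5]])
attn = np.array([[0.9, 0.1],
                 [0.4, 0.6]])
out = sign_bit_attention(x, attn)  # magnitudes come from attention only
```

Because the magnitude of every output element is taken from the attention map alone, the attention values directly indicate which regions contribute to the prediction, which is the explanatory signal the paper evaluates with the deletion metric.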
Keywords
binarization, explainability, neural networks, attention