Learning Reliable Visual Saliency for Model Explanations

IEEE Transactions on Multimedia (2020)

Abstract
By highlighting important features that contribute to a model's prediction, visual saliency is a natural form for interpreting the working mechanism of deep neural networks. Numerous methods have been proposed to achieve better saliency results. However, we find that previous visual saliency methods are not reliable enough to provide meaningful interpretation, as revealed by a simple sanity check: saliency methods are asked to explain the output of non-maximum prediction classes, which are usually not ground-truth classes. For example, we ask the methods to interpret an image of a "dog" given the wrong class label "fish" as the query. This procedure tests whether these methods reliably attribute the model's predictions to features that actually appear in the data. Our experiments show that previous methods fail this test, generating either near-identical saliency maps or scattered patterns. Such false saliency responses can be dangerous in certain scenarios, such as medical diagnosis. We find that these failure cases are mainly due to attribution vanishing and adversarial noise within these methods. To learn reliable visual saliency, we propose a simple method that requires the model's output to remain close to its original output while an explanatory saliency mask is learned. To enhance the smoothness of the optimized saliency masks, we further propose a simple Hierarchical Attribution Fusion (HAF) technique. To fully evaluate the reliability of visual saliency methods, we introduce a new task, Disturbed Weakly Supervised Object Localization (D-WSOL), which measures whether these methods correctly attribute the model's output to existing features. Experiments show that previous methods fail to meet this standard, while our approach improves reliability by suppressing false saliency responses. After observing a significant layout difference between the saliency masks of real and adversarial samples, we propose to train a simple CNN on the learned hierarchical attribution masks to distinguish adversarial samples. Experiments show that our method significantly improves detection performance over other approaches.
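The abstract only outlines the mask-learning objective, so the following is a minimal illustrative sketch rather than the authors' exact formulation. It assumes a pretrained PyTorch classifier `model` and an input batch `x`; the function name `learn_saliency_mask`, the KL-divergence output-preservation term, and the sparsity weight are assumptions chosen for illustration of the general idea of keeping the masked prediction close to the original while encouraging a compact mask.

```python
# Illustrative sketch (not the paper's exact method): learn a saliency mask
# m in [0, 1] so that the model's output on the masked input stays close to
# its original output, while a sparsity penalty keeps the mask compact.
import torch
import torch.nn.functional as F

def learn_saliency_mask(model, x, steps=300, lr=0.1, sparsity_weight=0.05):
    model.eval()
    with torch.no_grad():
        original_logits = model(x)  # reference output to preserve

    # Parameterize the mask through a sigmoid so values stay in [0, 1].
    mask_logits = torch.zeros(x.shape[0], 1, *x.shape[2:], requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits], lr=lr)

    for _ in range(steps):
        mask = torch.sigmoid(mask_logits)
        masked_x = x * mask  # keep only the evidence the mask lets through
        logits = model(masked_x)

        # Output-preservation term: the prediction on the masked input
        # should match the original prediction distribution.
        preserve_loss = F.kl_div(
            F.log_softmax(logits, dim=1),
            F.softmax(original_logits, dim=1),
            reduction="batchmean",
        )
        # Sparsity term: prefer small masks so the explanation is focused.
        sparsity_loss = mask.mean()

        loss = preserve_loss + sparsity_weight * sparsity_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(mask_logits).detach()
```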
Keywords
Visualization, Reliability, Predictive models, Task analysis, Perturbation methods, Backpropagation, Real-time systems