The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

arXiv (2020)

Abstract
For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions. Recently, an increasing number of works have focused on explaining the predictions of neural models in terms of the relevance of the input features. In this work, we show that feature-based explanations pose problems even for explaining trivial models. We show that, in certain cases, there exist at least two ground-truth feature-based explanations, and that, sometimes, neither of them is enough to provide a complete view of the model's decision-making process. Moreover, we show that two popular classes of explainers, Shapley explainers and minimal sufficient subset explainers, target fundamentally different types of ground-truth explanations, despite the implicit assumption, apparently shared across these works, that explainers should look for one specific feature-based explanation. These findings add a dimension to consider when both developing and choosing explainers.
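
As a concrete illustration of how the two ground truths can diverge (a sketch added here for exposition, not code from the paper), consider a trivial two-feature OR model with input (1, 1) and baseline (0, 0): exact Shapley values split the credit equally between the features, while each feature on its own already forms a minimal sufficient subset, yielding two distinct subset explanations. All names below (f, shapley_values, minimal_sufficient_subsets) are hypothetical choices for this example.

from itertools import combinations, permutations, product
from math import factorial

def f(x1, x2):
    """Toy model: logical OR of two binary features."""
    return int(bool(x1) or bool(x2))

def shapley_values(model, x, baseline):
    """Exact Shapley values via enumeration of all feature orderings;
    features outside the coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        z = list(baseline)
        prev = model(*z)
        for i in order:
            z[i] = x[i]                  # feature i joins the coalition
            curr = model(*z)
            phi[i] += curr - prev        # its marginal contribution
            prev = curr
    return [p / factorial(n) for p in phi]

def minimal_sufficient_subsets(model, x):
    """Feature subsets S whose values from x force the model's prediction
    for every assignment of the remaining binary features, and that
    contain no smaller sufficient subset."""
    n = len(x)
    target = model(*x)

    def sufficient(S):
        rest = [i for i in range(n) if i not in S]
        for bits in product([0, 1], repeat=len(rest)):
            z = list(x)
            for i, b in zip(rest, bits):
                z[i] = b
            if model(*z) != target:
                return False
        return True

    suff = [set(S) for r in range(n + 1)
            for S in combinations(range(n), r) if sufficient(set(S))]
    return [S for S in suff if not any(T < S for T in suff)]

x, baseline = (1, 1), (0, 0)
print(shapley_values(f, x, baseline))    # [0.5, 0.5]: equal credit to both
print(minimal_sufficient_subsets(f, x))  # [{0}, {1}]: two distinct subsets

Exact enumeration like this is exponential in the number of features; practical Shapley explainers such as SHAP instead rely on sampling or model-specific approximations.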
Keywords
Shapley values, explanations, minimal sufficient subsets