Axiomatic Aggregations of Abductive Explanations

Gagan Biradar, Yacine Izza, Elita Lobo, Vignesh Viswanathan, Yair Zick

AAAI 2024 (2024)

Abstract
The recent criticisms of the robustness of post hoc model approximation explanation methods (like LIME and SHAP) have led to the rise of model-precise abductive explanations. For each data point, abductive explanations provide a minimal subset of features that are sufficient to generate the outcome. While theoretically sound and rigorous, abductive explanations suffer from a major issue: there can be several valid abductive explanations for the same data point. In such cases, providing a single abductive explanation can be insufficient; on the other hand, providing all valid abductive explanations can be incomprehensible due to their size. In this work, we solve this issue by aggregating the many possible abductive explanations into feature importance scores. We propose three aggregation methods: two based on power indices from cooperative game theory and a third based on a well-known measure of causal strength. We characterize these three methods axiomatically, showing that each of them uniquely satisfies a set of desirable properties. We also evaluate them on multiple datasets and show that these explanations are robust to the attacks that fool SHAP and LIME.
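To illustrate the kind of aggregation the abstract describes, below is a minimal sketch of one plausible power-index-style aggregation: a Deegan-Packel-like score in which each abductive explanation (a minimal sufficient feature subset) distributes a unit of credit equally among its features. The function name, the feature names, and this particular index are illustrative assumptions; the paper's three methods and their exact definitions may differ.

```python
from collections import defaultdict

def aggregate_importance(explanations):
    """Aggregate abductive explanations (each a set of feature names)
    into per-feature importance scores, Deegan-Packel style: each
    explanation contributes 1/len(explanation) to each of its features,
    and totals are averaged over the number of explanations.
    (Illustrative sketch; not the paper's exact definition.)"""
    scores = defaultdict(float)
    for expl in explanations:
        share = 1.0 / len(expl)  # split one unit of credit equally
        for feature in expl:
            scores[feature] += share
    n = len(explanations)
    return {f: s / n for f, s in scores.items()}

# Hypothetical example: three minimal sufficient subsets for one data point.
expls = [{"age", "income"}, {"age"}, {"income", "credit"}]
print(aggregate_importance(expls))
```

Features that appear in many (or small) explanations receive higher scores, which is the intuition behind using power indices over the set of minimal sufficient subsets.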
Keywords
ML: Transparent, Interpretable, Explainable ML; GTEP: Cooperative Game Theory; CSO: Satisfiability Modulo Theories; CSO: Satisfiability; KRR: Diagnosis and Abductive Reasoning