GNNX-BENCH: Unravelling the Utility of Perturbation-based GNN Explainers through In-depth Benchmarking
arXiv (2023)
Abstract
Numerous explainability methods have been proposed to shed light on the inner
workings of GNNs. Although all of these algorithms include empirical
evaluations, the questions those evaluations pose lack diversity. As a result,
various facets of explainability pertaining to GNNs have yet to be formally
investigated: a comparative analysis of counterfactual reasoners; their
stability under variational factors such as different GNN architectures, noise,
and stochasticity in non-convex loss surfaces; their feasibility amidst domain
constraints; and so forth. Motivated by this need, we present
a benchmarking study on perturbation-based explainability methods for GNNs,
aiming to systematically evaluate and compare a wide range of explainability
techniques. Among the key findings of our study, we identify the Pareto-optimal
methods that exhibit superior efficacy and stability in the presence of noise.
Nonetheless, our study reveals that all algorithms are affected by stability
issues when faced with noisy data. Furthermore, we have established that the
current generation of counterfactual explainers often fails to provide feasible
recourses due to violations of topological constraints encoded by
domain-specific considerations. Overall, this benchmarking study empowers
stakeholders in the field of GNNs with a comprehensive understanding of the
state-of-the-art explainability methods, potential research problems for
further enhancement, and the implications of their application in real-world
scenarios.
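The stability claim above can be made concrete with a small metric sketch. The following is a minimal illustration, not the paper's actual protocol: perturb a graph's edge list with random spurious edges, re-run an explainer, and compare the resulting top-k explanation edges to the clean-graph explanation via Jaccard similarity. The `explain` callable, the noise budget `n_new`, and the trial count are all placeholder assumptions standing in for whichever perturbation-based explainer and settings are being benchmarked.

```python
import random
from typing import Callable, Set, Tuple

Edge = Tuple[int, int]  # undirected edge stored as (min(u, v), max(u, v))


def add_noise(edges: Set[Edge], num_nodes: int, n_new: int, seed: int = 0) -> Set[Edge]:
    """Return a copy of `edges` with up to `n_new` random spurious edges added."""
    rng = random.Random(seed)
    noisy = set(edges)
    target = len(edges) + n_new
    for _ in range(100 * n_new):  # bounded attempts so dense graphs cannot loop forever
        if len(noisy) >= target:
            break
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v:
            noisy.add((min(u, v), max(u, v)))
    return noisy


def jaccard(a: Set[Edge], b: Set[Edge]) -> float:
    """Jaccard similarity between two explanation edge sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0


def stability(
    explain: Callable[[Set[Edge]], Set[Edge]],  # hypothetical explainer: graph -> top-k edges
    edges: Set[Edge],
    num_nodes: int,
    n_new: int = 5,
    trials: int = 10,
) -> float:
    """Mean overlap between the clean-graph explanation and noisy-copy explanations."""
    base = explain(edges)
    scores = [
        jaccard(base, explain(add_noise(edges, num_nodes, n_new, seed=t)))
        for t in range(trials)
    ]
    return sum(scores) / len(scores)
```

A score near 1.0 indicates the explainer returns nearly the same edges despite the injected noise; the abstract's finding is that, under metrics in this spirit, all evaluated algorithms degrade.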
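The feasibility failure reported for counterfactual explainers can likewise be illustrated with a toy constraint check. This is a hypothetical example rather than the paper's evaluation code: treat each node's maximum degree as a domain constraint (e.g., atomic valence in molecular graphs) and flag counterfactual graphs that exceed it.

```python
from collections import Counter
from typing import Dict, Set, Tuple

Edge = Tuple[int, int]


def is_feasible(edges: Set[Edge], max_degree: Dict[int, int]) -> bool:
    """Reject counterfactual graphs that violate a per-node degree budget,
    a stand-in for topological domain constraints such as atomic valence."""
    deg: Counter = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return all(cnt <= max_degree.get(n, 0) for n, cnt in deg.items())


if __name__ == "__main__":
    valence = {0: 4, 1: 4, 2: 1, 3: 1, 4: 1, 5: 1}        # toy "molecule"
    cf_edges = {(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)}   # node 0 reaches degree 5
    print(is_feasible(cf_edges, valence))                  # False: constraint violated
```

A counterfactual edit that flips a prediction but fails such a check is not a usable recourse, which is the gap the abstract highlights in the current generation of counterfactual explainers.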