Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees
arXiv (2023)
Abstract
There is an emerging interest in generating robust counterfactual
explanations that would remain valid if the model is updated or changed even
slightly. Towards finding robust counterfactuals, existing literature often
assumes that the original model m and the new model M are bounded in the
parameter space, i.e., Params(M)-Params(m)<Δ.
However, models can often change significantly in the parameter space with
little to no change in their predictions or accuracy on the given dataset. In
this work, we introduce a mathematical abstraction termed
naturally-occurring model change, which allows for arbitrary changes
in the parameter space such that the change in predictions on points that lie
on the data manifold is limited. Next, we propose a measure – that we call
Stability – to quantify the robustness of counterfactuals to
potential model changes for differentiable models, e.g., neural networks. Our
main contribution is to show that counterfactuals with sufficiently high value
of Stability as defined by our measure will remain valid after
potential naturally-occurring model changes with high probability
(leveraging concentration bounds for Lipschitz function of independent
Gaussians). Since our quantification depends on the local Lipschitz constant
around a data point which is not always available, we also examine practical
relaxations of our proposed measure and demonstrate experimentally how they can
be incorporated to find robust counterfactuals for neural networks that are
close, realistic, and remain valid after potential model changes. This work
also has interesting connections with model multiplicity, also known as the
Rashomon effect.
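The Stability idea above can be illustrated with a minimal Monte-Carlo sketch. The code below estimates a stability-style score for a candidate counterfactual: sample points from a Gaussian around it, score them with the model, and return the mean prediction minus k times the standard deviation. The function names, the toy sigmoid model, and the exact score formula are assumptions for illustration, not the paper's precise definition (which involves the local Lipschitz constant).

```python
import numpy as np

def stability_score(model, x, sigma=0.1, k=1.0, n_samples=1000, rng=None):
    """Monte-Carlo estimate of a stability-style score around point x.

    Draws n_samples points from N(x, sigma^2 I), scores each with the model,
    and returns mean - k * std. A high value suggests the model's output near
    x is both confidently high and locally flat, so the counterfactual is
    more likely to survive small, naturally-occurring model changes.
    (Illustrative sketch only; not the paper's exact measure.)
    """
    rng = np.random.default_rng(rng)
    perturbed = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    scores = np.array([model(p) for p in perturbed])
    return scores.mean() - k * scores.std()

def toy_model(x):
    # Toy differentiable "model": sigmoid of a fixed linear score.
    w = np.array([2.0, -1.0])
    return 1.0 / (1.0 + np.exp(-(x @ w)))

x_cf = np.array([1.5, 0.2])  # hypothetical counterfactual point
s = stability_score(toy_model, x_cf, sigma=0.05, k=1.0, rng=0)
```

A counterfactual search could then prefer candidates whose score exceeds a threshold, trading a little extra distance for robustness to model updates.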