Abstract Interpretation-Based Feature Importance for Support Vector Machines

Verification, Model Checking, and Abstract Interpretation (VMCAI 2024), Part I (2024)

Abstract
We study how a symbolic representation for support vector machines (SVMs), specified by means of abstract interpretation, can be exploited for: (1) enhancing the interpretability of SVMs through a novel feature importance measure, called abstract feature importance (AFI), that does not depend in any way on a given dataset or on the accuracy of the SVM and is very fast to compute; and (2) certifying individual fairness of SVMs and producing concrete counterexamples when this verification fails. We implemented our methodology and empirically demonstrated its effectiveness on SVMs based on linear and nonlinear (polynomial and radial basis function) kernels. Our experimental results show that, independently of the accuracy of the SVM, our AFI measure correlates much more strongly with the stability of the SVM under feature perturbations than major feature importance measures available in machine learning software, such as permutation feature importance, therefore providing better insight into the trustworthiness of SVMs.
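To make the abstract-interpretation idea concrete, the following is a minimal illustrative sketch (not the paper's actual AFI definition) for the simplest case of a linear-kernel SVM: the interval abstract domain propagates a box of feature perturbations through the decision function f(x) = w·x + b, and a per-feature importance score is read off from the width of each feature's interval contribution. The function names and the normalization are hypothetical choices made for illustration only.

```python
import numpy as np

def interval_decision_bounds(w, b, lo, hi):
    """Interval abstract interpretation of a linear SVM decision
    function f(x) = w.x + b over the box [lo, hi].

    Each term w_i * x_i attains its extremes at the box endpoints,
    with the endpoint choice depending on the sign of w_i."""
    term_lo = np.where(w >= 0, w * lo, w * hi)
    term_hi = np.where(w >= 0, w * hi, w * lo)
    return term_lo.sum() + b, term_hi.sum() + b

def abstract_feature_importance(w, lo, hi):
    """Illustrative AFI-like score: the width of each feature's
    interval contribution |w_i| * (hi_i - lo_i), normalized to sum
    to 1. It depends only on the model and the perturbation box,
    not on any dataset."""
    widths = np.abs(w) * (hi - lo)
    return widths / widths.sum()

# Example: two features perturbed within the unit box.
w = np.array([2.0, -1.0])
lo = np.array([0.0, 0.0])
hi = np.array([1.0, 1.0])

lower, upper = interval_decision_bounds(w, 0.0, lo, hi)
afi = abstract_feature_importance(w, lo, hi)
```

If the lower bound is positive (or the upper bound negative) over the whole box, the classification is certified stable on that region; when the interval straddles zero, a counterexample search is warranted. Note that for nonlinear kernels the propagation must handle the kernel expansion, which the paper addresses with a dedicated symbolic representation.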