A k-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning

arXiv (2023)

Citations: 3
Abstract
Besides accuracy, recent studies on machine learning models have been addressing the question of how the obtained results can be interpreted. Indeed, while complex machine learning models can achieve very good accuracy even in challenging applications, they are difficult to interpret. Aiming at providing some interpretability for such models, one of the best-known methods, called SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome for an instance of interest. Since computing SHAP values requires evaluating all possible coalitions of attributes, the computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts a strategy that approximates such values with less computational effort. However, we see two weaknesses in Kernel SHAP: its formulation is difficult to understand, and it does not exploit further game-theoretic assumptions that could reduce the computational cost. In this paper, we propose a novel approach that addresses both weaknesses. First, we provide a straightforward formulation of a SHAP-based method for local interpretability using the Choquet integral, which yields both Shapley values and Shapley interaction indices. Then, we propose to adopt the concept of k-additive games from game theory, which helps reduce the computational effort of estimating the SHAP values. The obtained results show that our proposal needs fewer coalition evaluations to approximate the SHAP values.
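To see the computational cost the paper targets, consider the classical Shapley value definition from game theory: each attribute's value is a weighted average of its marginal contributions over all 2^n coalitions. The sketch below (not the paper's method, just the textbook formula on a hypothetical toy game) makes the exponential enumeration explicit; the paper's k-additive assumption restricts the game's Möbius representation so far fewer coalition evaluations are needed (e.g., for a 2-additive game, only terms on singletons and pairs are nonzero).

```python
from itertools import combinations
from math import factorial

def shapley_values(n, v):
    """Exact Shapley values for an n-player game.

    v maps a frozenset coalition to a real payoff. This enumerates all
    2^n coalitions, which is the exponential cost that k-additive
    approximations aim to avoid.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(S | {i}) - v(S))
    return phi

# Toy additive game: the coalition value is the sum of member weights,
# so each player's Shapley value equals its own weight.
weights = [1.0, 2.0, 3.0]
print(shapley_values(3, lambda S: sum(weights[p] for p in S)))
```

In a local-interpretability setting, v(S) would be the model's prediction when only the attributes in S are "present" (the others marginalized out), which is exactly why evaluating every coalition is expensive for models with many features.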
Keywords
Local interpretability, Choquet integral, Machine learning, Shapley values