On the interpretability of Fuzzy Cognitive Maps

Knowledge-Based Systems (2023)

Abstract
This paper proposes a post-hoc explanation method, based on SHapley Additive exPlanations (SHAP) values, for computing concept attribution in Fuzzy Cognitive Map (FCM) models used for scenario analysis. The proposal is motivated by the lack of approaches that exploit the often-claimed intrinsic interpretability of FCM models while accounting for their dynamic properties. Our method uses the initial activation values of concepts as input features, while the outputs are the hidden states produced by the FCM model during the recurrent reasoning process. Hence, the relevance of neural concepts is computed taking into account the model’s dynamic properties and hidden states, which result from the interaction among the initial conditions, the weight matrix, the activation function, and the selected reasoning rule. The proposed post-hoc method can handle situations where the FCM model does not converge, as well as those where it converges to a unique fixed-point attractor in which the final activation values of neural concepts are invariant. The effectiveness of the proposed approach is demonstrated through experiments conducted on real-world case studies.
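The recurrent reasoning process the abstract refers to can be sketched as follows. This is a minimal illustration, not code from the paper: the sigmoid steepness `lam`, the step limit, and the convergence tolerance are illustrative assumptions, and the reasoning rule shown (activation via the transposed weight matrix) is one common FCM formulation among several.

```python
import numpy as np

def fcm_reason(a0, W, steps=50, tol=1e-5, lam=5.0):
    """Iterate the FCM reasoning rule a(t+1) = f(W^T a(t)),
    collecting the hidden states produced along the way.

    a0: initial activation values of the concepts (the input features).
    W:  weight matrix; W[i, j] is the causal influence of concept i on j.
    f:  sigmoid squashing function with steepness lam (an assumed choice).
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-lam * x))
    states = [np.asarray(a0, dtype=float)]
    for _ in range(steps):
        states.append(f(W.T @ states[-1]))
        # Stop early if we reached a fixed-point attractor, i.e. the
        # activation values are (numerically) invariant between steps.
        if np.max(np.abs(states[-1] - states[-2])) < tol:
            break
    return states  # hidden states; the paper's outputs for attribution

# Hypothetical 2-concept example: states[0] is the input, later entries
# are the hidden states whose concept relevance the method explains.
W = np.array([[0.0, 0.5],
              [0.3, 0.0]])
states = fcm_reason([0.2, 0.8], W)
```

Because the method treats every hidden state as an output, attribution remains well defined even when the loop above exhausts its step budget without the early-stopping test firing, i.e. when the map does not converge.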
Keywords
Fuzzy Cognitive Maps, Decision making, Concept relevance, Interpretability