ELM: A Fast Explainability Approach for Extreme Learning Machines

Brandon Warner, Edward Ratner, Amaury Lendasse

Advances in Computational Intelligence, IWANN 2023, Part II (2023)

Abstract
In recent years, Explainable Artificial Intelligence (XAI) has emerged as one of the key specializations in Machine Learning (ML) research. XAI has gained significant interest in the last decade, due in part to the reluctance of sensitive domains to adopt "black box" models (i.e., models whose reasoning is ambiguous or obfuscated). This motivation has led to the rediscovery of Shapley values [1], a method originating in coalitional game theory for optimally distributing the "payout" (i.e., importance) among the "players" (i.e., features) of a model. More recently, Lundberg and Lee developed a sophisticated methodology for approximating Shapley values by computing SHAP (SHapley Additive exPlanations) values [2]. SHAP uses the coefficients of local linear models fitted for every sample in a test set, providing robust sample-level, model-agnostic explainability. Calculating global SHAP values is therefore computationally expensive, as it requires building a model for every sample in a test or validation set. To address these tractability concerns, we propose eXplainable Extreme Learning Machine (X-ELM) values, which can be computed from the parameters of a single ELM ensemble model to holistically evaluate the global importance of each feature in a dataset. We compare the extracted ELM coefficients to values extracted using SHAP to show that our approach yields values comparable to the state-of-the-art (SOTA) game-theoretic approaches at a dramatically lower computational cost.
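The abstract does not reproduce the X-ELM derivation itself. The following is a minimal sketch of the general idea of reading global feature importance out of a trained ELM's weights, assuming a single-hidden-layer tanh ELM and a gradient-based aggregation of its input and output weights; both are illustrative assumptions, not the authors' exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def fit_elm(X, y, n_hidden=100):
    """Fit an ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # learned output weights
    return W, b, beta

def elm_feature_importance(X, W, b, beta):
    """Hypothetical X-ELM-style global scores: mean |d y_hat / d x_j| over X.
    For tanh, d y_hat / d x_j = sum_k W[j, k] * (1 - H_k**2) * beta_k."""
    H = np.tanh(X @ W + b)
    grads = (1.0 - H**2) * beta                 # per-sample hidden gradients
    return np.abs(grads @ W.T).mean(axis=0)     # mean |gradient| per feature

# Toy usage: feature 0 drives the target, feature 2 is pure noise.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
W, b, beta = fit_elm(X, y)
print(elm_feature_importance(X, W, b, beta))    # feature 0 should dominate

For reference, the SHAP baseline the authors compare against can be computed with shap.KernelExplainer, which fits a local weighted linear model for every explained sample; averaging the absolute SHAP values over the test set then yields global importances, at the per-sample cost the abstract highlights.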
Keywords
Machine Learning, Extreme Learning Machine, Variable Importance, Feature Importance, Explainable Artificial Intelligence, XAI, Interpretability, Explainability, Comprehensibility, Black-box