Optimizing LIME Explanations Using REVEL Metrics.

Iván Sevillano-García, Julián Luengo, Francisco Herrera

HAIS (2023)

Abstract
Explainable artificial intelligence (XAI) has emerged as a crucial topic in the field of machine learning to provide insights into the reasoning performed by artificial intelligence (AI) systems. However, the lack of a clear definition of explanation and of a standard methodology for evaluating the quality of explanations has made it challenging to develop effective XAI systems. One commonly used approach is Local Linear Explanations, but the evaluation of their quality remains unclear due to theoretical inconsistencies. This issue is even more challenging in image recognition, where visual explanations often detect edges rather than providing clear explanations for decisions. To address this issue, several metrics that quantitatively measure different aspects of explanation quality in a robust and mathematically consistent manner have been proposed. In this work, we apply the REVEL framework, which standardizes the concept of explanation and allows both the comparison of different explanations and the absolute evaluation of individual explanations. We provide a guide to using the REVEL framework in an optimization process that aims to improve the explainability of machine learning models. We apply the five proposed metrics to the CIFAR-10 benchmark and demonstrate their descriptive, analytical and optimization power. Our work contributes to the development of XAI systems that provide reliable and interpretable explanations for AI reasoning.
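As background for the abstract above, the sketch below shows how a LIME local linear explanation is typically produced for an image classifier such as one trained on CIFAR-10, using the `lime` Python package. The names `model` and `image`, and the PyTorch preprocessing, are illustrative assumptions; the REVEL metrics themselves are not reproduced here.

```python
# Minimal sketch (not the paper's code): a LIME explanation for an image
# classifier. Assumes the `lime` package, a pre-trained PyTorch model
# `model`, and a 32x32 RGB numpy image `image` with values in [0, 1];
# these names are hypothetical placeholders.
import numpy as np
import torch
from lime import lime_image

def classifier_fn(images):
    """Return class probabilities for a batch of HxWxC numpy images."""
    batch = torch.tensor(images, dtype=torch.float32).permute(0, 3, 1, 2)
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1).numpy()

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,            # HxWxC numpy array
    classifier_fn,
    top_labels=1,     # explain only the top predicted class
    num_samples=1000, # perturbed samples used to fit the local linear model
)

# LIME assigns a weight to each superpixel; these weights are the
# coefficients of the local linear surrogate, which quality metrics such as
# those in the REVEL framework would then evaluate.
label = explanation.top_labels[0]
weights = dict(explanation.local_exp[label])
print(f"Explained label {label} with {len(weights)} superpixel weights")
```

The superpixel weights extracted at the end form the Local Linear Explanation whose quality the proposed metrics are intended to score and optimize.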
Keywords
LIME explanations, REVEL metrics