Exploring Interpretable XAI Algorithms for Image Classification and Prediction Explanations

Dmytro Furman, Marián Mach, Dominik Vranay, Peter Sinčák

2023 World Symposium on Digital Intelligence for Systems and Machines (DISA)

Abstract
This paper investigates several eXplainable Artificial Intelligence (XAI) algorithms, examining how they simplify image classification tasks and provide interpretable explanations for the predictions made by classifiers. Specifically, we examine the Glance and Focus Network (GFNet), which employs image segmentation to simplify classification, and Local Interpretable Model-Agnostic Explanations (LIME), which locally learns an interpretable model to explain individual predictions. The study additionally incorporates the CARE framework, which animates attentional convolutional neural networks through transformations, and Integrated Gradients, a method based on semantic segmentation. The primary focus of these XAI methods lies in object detection within images and the visualization of the regions responsible for a trained neural network's identification of specific features. To bridge the gap between technical experts and end-users, this work develops a user-friendly interface that facilitates experimentation with convolutional neural networks and enhances understanding of their behaviour. By promoting accessibility and comprehensibility in artificial intelligence, a broader audience will be able to engage with AI systems and gain a clearer understanding of their potential benefits and limitations. This research contributes to the responsible and ethical development and deployment of artificial intelligence systems.
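To illustrate the attribution idea behind one of the methods named above, the following is a minimal, model-agnostic sketch of Integrated Gradients. It is not the paper's implementation: the toy scoring function `f`, the step count, and the use of central finite differences in place of automatic differentiation are all illustrative assumptions. The sketch computes IG_i(x) = (x_i − x'_i) · ∫₀¹ ∂F(x' + α(x − x'))/∂x_i dα via a midpoint Riemann sum.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=100):
    """Approximate Integrated Gradients attributions for a scalar function f.

    Gradients along the straight-line path from `baseline` to `x` are
    estimated with central finite differences (a model-agnostic stand-in
    for autodiff), then averaged with a midpoint Riemann sum.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    eps = 1e-5
    total_grad = np.zeros_like(x)
    for a in alphas:
        point = baseline + a * (x - baseline)
        grad = np.zeros_like(x)
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = eps
            grad[i] = (f(point + step) - f(point - step)) / (2 * eps)
        total_grad += grad
    # Scale the averaged path gradient by the input-baseline difference.
    return (x - baseline) * total_grad / steps

# Toy "classifier logit": a smooth nonlinear function of two features
# (an assumption for demonstration, not a trained network).
f = lambda v: v[0] ** 2 + 3.0 * v[1]
x = np.array([2.0, 1.0])
baseline = np.zeros(2)
attr = integrated_gradients(f, x, baseline)
print(attr, attr.sum())
```

A useful sanity check is the completeness axiom: the attributions should sum to f(x) − f(baseline) (here 7.0), which the Riemann-sum approximation recovers closely.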
Keywords
explainable artificial intelligence, web application, neural networks, machine learning, artificial intelligence