Explainable AI for Software Defect Prediction with Gradient Boosting Classifier

2022 7th International Conference on Computer Science and Engineering (UBMK)(2022)

Abstract
Explainability is one of the most investigated quality attributes and is of increasing interest to stakeholders using Artificial Intelligence (AI) software, especially Machine Learning software. Since AI-based software differs from traditional software in its black-box nature, understanding the logic behind its predictions has become very important. In this study, we focus on the explainability of a Gradient Boosting (GB) classifier used for software defect prediction (SDP). We apply post-hoc, model-agnostic methods, namely "Explain like I am a 5-year-old" (ELI5), "Local Interpretable Model-Agnostic Explanations" (LIME), and "SHapley Additive exPlanations" (SHAP), to an SDP dataset provided by NASA in order to shed light on the explainability of the GB classifier. More specifically, we use ELI5 and LIME to explain instances locally, and SHAP to obtain both local and global explanations. The results suggest a post-hoc, model-agnostic way to quantify explainability, and indicate that all three methods produced results consistent with one another while explaining the GB model.
Keywords
explainability,artificial intelligence,XAI,software defect prediction,post-hoc methods,ELI5,SHAP,LIME
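As a rough illustration of the post-hoc, model-agnostic workflow the abstract describes, the sketch below trains a Gradient Boosting classifier and derives a global explanation for it. Note the assumptions: the data is a synthetic stand-in for the NASA SDP dataset (which is not reproduced here), and permutation importance is used as the explanation method purely for illustration, since the paper's actual tools (ELI5, LIME, SHAP) require extra packages; the general idea of interrogating a trained black-box model after the fact is the same.

```python
# Hedged sketch of post-hoc, model-agnostic explanation of a GB classifier.
# Assumptions: synthetic data stands in for the NASA defect dataset, and
# permutation importance stands in for ELI5/LIME/SHAP used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary "defective / non-defective" data with 8 metric-like features.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the black-box model to be explained.
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc global explanation: how much does shuffling each feature,
# one at a time, degrade the model's held-out accuracy?
result = permutation_importance(gb, X_test, y_test, n_repeats=10,
                                random_state=0)
ranking = sorted(enumerate(result.importances_mean),
                 key=lambda t: t[1], reverse=True)
for idx, imp in ranking:
    print(f"feature {idx}: mean importance {imp:.3f}")
```

In the paper's actual setup, the same trained `gb` model would instead be passed to, e.g., SHAP's tree explainer to obtain both per-instance (local) and dataset-wide (global) attributions.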