Local interpretation techniques for machine learning methods: Theoretical background, pitfalls and interpretation of LIME and Shapley values

Crossref (2023)

Abstract
Machine learning methods have become popular in psychological research. To predict the outcome variable, machine learning methods use complex functions to describe non-linear and higher-order interaction effects. However, researchers in psychology are accustomed to parametric models, such as linear or logistic regression, whose parameters can be clearly interpreted, while machine learning methods often lack such interpretable parameters. To gain insight into how a machine learning method has made its predictions, different interpretation techniques have been proposed. They support researchers in understanding, for example, which variables have been important for the machine learning predictions. In this article, we focus on two local interpretation techniques that are widely used in machine learning: Local Interpretable Model-Agnostic Explanations (LIME) and Shapley values. LIME aims to explain machine learning predictions in the close neighborhood of selected persons. Shapley values can be understood as a measure of predictor relevance, that is, the contribution of the predictor variables to the prediction for a specific person. Using two illustrative, simulated examples, we explain the ideas behind LIME and Shapley values, demonstrate their characteristics, and discuss challenges that might arise in their application and interpretation. For LIME, we demonstrate how the choice of the neighborhood size may affect conclusions. For Shapley values, we show how they can be interpreted jointly for all persons in the sample and discuss similarities with global interpretation techniques. The aim of this article is to support researchers in safely using these interpretation techniques themselves, but also in critically evaluating interpretations when they encounter these techniques in research articles.
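To make the two techniques concrete, the sketch below applies them to a small simulated data set. It is a minimal illustration, not the paper's own analysis: it assumes the Python packages scikit-learn, lime, and shap, an arbitrary random forest model, and illustrative parameter choices. It prints a LIME explanation for one person under two neighborhood widths, the Shapley values for that person, and the mean absolute Shapley value per predictor as a simple joint summary over all persons.

```python
# Minimal, illustrative sketch (not from the paper): LIME and Shapley values
# for one person in a simulated data set. Assumes numpy, scikit-learn,
# lime, and shap are installed; all data and parameter values are arbitrary.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer
import shap

# Simulated data: three predictors, outcome with an interaction effect.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=500)
feature_names = ["x1", "x2", "x3"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# --- LIME: explain the prediction for one "person" (row) locally. ---
# kernel_width controls the size of the neighborhood; as the abstract notes,
# this choice may change the conclusions, so it is worth varying.
for width in (0.5, 3.0):
    lime_explainer = LimeTabularExplainer(
        X, mode="regression", feature_names=feature_names, kernel_width=width
    )
    explanation = lime_explainer.explain_instance(X[0], model.predict, num_features=3)
    print(f"LIME (kernel_width={width}):", explanation.as_list())

# --- Shapley values: per-person contributions of each predictor. ---
tree_explainer = shap.TreeExplainer(model)
shap_values = tree_explainer.shap_values(X)  # shape: (n_persons, n_predictors)
print("Shapley values for person 0:", shap_values[0])

# Interpreting Shapley values jointly for all persons: the mean absolute value
# per predictor resembles a global importance measure.
print("Mean |SHAP| per predictor:", np.abs(shap_values).mean(axis=0))
```

Varying kernel_width mirrors the paper's point about the size of the LIME neighborhood, and averaging absolute Shapley values over all persons is one simple way in which local explanations can be aggregated toward a global interpretation.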