Forms of Understanding of XAI-Explanations.

Hendrik Buschmeier, Heike M. Buhl, Friederike Kern, Angela Grimminger, Helen Beierling, Josephine B. Fisher, André Groß, Ilona Horwath, Nils Klowait, Stefan Lazarov, Michael Lenke, Vivien Lohmer, Katharina J. Rohlfing, Ingrid Scharlau, Amit Singh, Lutz Terfloth, Anna-Lisa Vollmer, Yu Wang, Annedore Wilmes, Britta Wrede

CoRR (2023)

Abstract
Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) 'understanding' on the part of the explainee. However, what it means to 'understand' is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding in the context of XAI and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, and psychology, it explores a definition of understanding and its forms, its assessment, and its dynamics during the process of giving everyday explanations. Two types of understanding are considered possible outcomes of explanations: enabledness, 'knowing how' to do or decide something, and comprehension, 'knowing that' -- both in varying degrees (from shallow to deep). Explanations regularly start from shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, increases in comprehension and enabledness are highly interdependent. Against the background of this systematization, special challenges of understanding in XAI are discussed.