How to Explain It to a Model Manager? - A Qualitative User Study About Understandability, Trustworthiness, Actionability, and Action Efficacy.

Helmut Degen, Christof J. Budnik, Ralf Gross, Marcel Rothering

HCI (40) (2023)

Abstract
In the context of explainable AI (XAI), little research has been done to show what user-role-specific explanations look like. This research aims to identify the explanation needs of a user role called “model manager”: a user who monitors multiple AI-based systems for quality assurance in manufacturing. The question this research attempts to answer is: what are the explainability needs of the model manager? Using a design analysis technique (task questions), a concept (UI mockup) was created in a controlled way. Additionally, a causal chain model was created and used as an assumed representation of the mental model for explanations. Furthermore, several options for confidence levels were explored. In a qualitative user study (cognitive walkthrough) with ten participants, the study investigated which explanations are needed to support understandability, trustworthiness, and actionability. The research concludes with four findings: F1) A mental model for explanations is an effective way to identify uncertainty-addressing explanation content that meets the specific needs of the target user group. F2) “AI domain” and “application domain” explanations are identified as new explanation categories. F3) “Show your work” and “singular” explanations are identified as new explanation categories. F4) “Actionability” is identified as a new explanation quality.
Keywords
model manager