Exploring Mental Models for Explainable Artificial Intelligence: Engaging Cross-disciplinary Teams Using a Design Thinking Approach.

HCI (40) (2023)

Abstract
Exploring end-users’ understanding of Artificial Intelligence (AI) systems’ behaviours and outputs is crucial in developing accessible Explainable Artificial Intelligence (XAI) solutions. Investigating mental models of AI systems is core to understanding and explaining the often opaque, complex, and unpredictable nature of AI. Researchers employ surveys, interviews, and observations when studying software systems, yielding useful evaluations. However, an evaluation gulf still exists, primarily around comprehending end-users’ understanding of AI systems. It has been argued that exploring theories of human decision-making from the fields of psychology, philosophy, and human-computer interaction (HCI), taking a people-centric rather than a product- or technology-centric approach, can result in initial XAI solutions with great potential. Our work presents the results of a design thinking workshop with 14 cross-disciplinary participants with backgrounds in philosophy, psychology, computer science, AI systems development, and HCI. Participants undertook design thinking activities to ideate how AI system behaviours may be explained to end-users in order to bridge the explanation gulf of AI systems. We reflect on design thinking as a methodology for exploring end-users’ perceptions and mental models of AI systems, with a view to creating effective, useful, and accessible XAI.
Keywords
explainable artificial intelligence,design thinking approach,mental models,artificial intelligence,cross-disciplinary