How to Facilitate Explainability of AI for Increased User Trust - Results of a Study with a COVID-19 Risk Calculator.

MIPRO (2021)

Abstract
While the market for smart technologies is steadily growing, much research remains to be done on the interaction between human users and Artificial Intelligence (AI) technologies. Specifically, the field of Explainable Artificial Intelligence (XAI) focuses on making AI explainable to users. To provide a user-centered approach to this growing field, this paper describes a study investigating possible processes and methods. For this purpose, 20 participants were asked to use an AI system that provided them with the results of a personalized COVID-19 risk calculation. The study results indicate that while participants generally seemed to think that the presented results of the system were accurate, only a few said that they would change their behavior after receiving the results, and many asked for additional information to better understand the results. This paper discusses the findings along with possible approaches to increase behavior change in users of smart systems.
Keywords
Human Factors, Artificial Intelligence, Explainable Artificial Intelligence