Effects of AI and Logic-Style Explanations on Users' Decisions Under Different Levels of Uncertainty

ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS (2023)

Abstract
Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, although previous work evaluates users' understanding of explanations, factors influencing the decision support are largely overlooked in the literature. This article addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty and explanation logic-styles for classification tasks. We conducted two separate studies: one asking participants to recognize handwritten digits and one to classify the sentiment of reviews. To assess decision making, we analyzed task performance, agreement with the AI suggestion, and the user's reliance on the XAI interface elements. Participants made their decisions relying on three pieces of information in the XAI interface (image or text instance, AI prediction, and explanation). Each participant was shown one explanation style (between-participants design) following one of three styles of logical reasoning (inductive, deductive, and abductive). This allowed us to study how different levels of AI uncertainty influence the effectiveness of different explanation styles. The results show that user uncertainty and AI correctness on predictions significantly affected users' classification decisions across the analyzed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident about their choices, and this effect was more pronounced for text. Furthermore, inductive-style explanations led to overreliance on the AI advice in both domains: they were the most persuasive, even when the AI was incorrect. The abductive and deductive styles had complex effects depending on the domain and the AI uncertainty levels.
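The abstract reports three per-trial measures: task performance, agreement with the AI suggestion, and overreliance when the AI is wrong. The sketch below is not the paper's analysis code; field names and data are hypothetical, and it only illustrates one way such measures could be computed from logged trials.

```python
import numpy as np

# Hypothetical trial log: whether the AI suggestion was correct and whether
# the participant went along with it. Field names are illustrative only.
trials = [
    {"ai_correct": True,  "user_agreed": True},
    {"ai_correct": False, "user_agreed": True},
    {"ai_correct": True,  "user_agreed": False},
]

ai_correct = np.array([t["ai_correct"] for t in trials])
user_agreed = np.array([t["user_agreed"] for t in trials])

# Task performance: the user is right when they agree with a correct AI
# or reject an incorrect one.
accuracy = np.mean(user_agreed == ai_correct)

# Agreement with the AI overall, and overreliance = agreement on the
# subset of trials where the AI was incorrect.
agreement = user_agreed.mean()
overreliance = user_agreed[~ai_correct].mean() if (~ai_correct).any() else float("nan")

print(f"accuracy={accuracy:.2f} agreement={agreement:.2f} overreliance={overreliance:.2f}")
```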
Keywords
Explainable AI, user uncertainty, AI uncertainty, AI correctness, explanations, logical reasoning, MNIST, Yelp Reviews, neural networks, CNNs, intelligent user interfaces