Increasing System Transparency About Medical AI Recommendations May Not Improve Clinical Experts’ Decision Quality

Social Science Research Network (2021)

Abstract
Medical AI systems generate personalized recommendations to improve patient care, but it is unclear how system transparency affects the way clinicians incorporate AI recommendations into care decisions. We employ mixed methods, combining semi-structured interviews with two computer-based experiments, to examine factors posited to support proper system use. In the Study 1 interviews, clinicians expressed that features such as confidence levels and explanations for AI recommendations would increase adoption, consistent with the general literature on recommender systems. To evaluate this in a clinical context, we conducted a pair of experiments in which kidney transplant experts completed decision tasks. In Study 2, participants received AI recommendations for drug dosing and were shown (or not) the confidence level and an explanation. In Study 3, participants were shown explanations (or not) and received two patient cases, each with either a high- or low-quality AI recommendation. Contrary to theoretical predictions, providing explanations did not uniformly increase adoption of AI recommendations or improve clinical decision quality. Instead, explanations increased adoption of low-quality AI recommendations and decreased adoption of high-quality recommendations. The results also revealed significant differences between physicians and non-physicians in their use of AI advice.