Impact of Fidelity and Robustness of Machine Learning Explanations on User Trust

Advances in Artificial Intelligence, AI 2023, Part II (2024)

Abstract
eXplainable machine learning (XML) has recently emerged as a promising approach to addressing the inherent opacity of machine learning (ML) systems by providing insights into their reasoning processes. This paper explores the relationships among user trust, fidelity, and robustness in the context of ML explanations. To investigate these relationships, we conduct a user study in the context of predicting students' performance. The study focuses on two scenarios: (1) a fidelity-based scenario, exploring the dynamics of user trust across explanations at varying fidelity levels, and (2) a robustness-based scenario, examining the dynamics of user trust with respect to robustness. For each scenario, we run experiments with two metrics: self-reported trust and behaviour-based trust. In the fidelity-based scenario, the behaviour-based trust results show that users trust both high- and low-fidelity explanations more than no explanations at all, a difference not reflected in the self-reported trust results. Both metrics consistently indicate no significant differences in user trust between explanations at different fidelity levels. In the robustness-based scenario, the two metrics yield contrasting results: the self-reported trust metric shows no variation in user trust across robustness levels, whereas the behaviour-based trust metric suggests that user trust is higher when explanations are more robust.
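The abstract does not specify how fidelity and robustness are computed; the sketch below illustrates standard formulations of both properties, not the exact metrics used in the study. All names (black_box_predict, surrogate_predict, explain) are hypothetical callables assumed for illustration: fidelity is taken as surrogate-model prediction agreement with the black box, and robustness as explanation stability under small input perturbations.

```python
import numpy as np

def fidelity(black_box_predict, surrogate_predict, X):
    """Fraction of samples on which the surrogate explanation model
    agrees with the black-box model (higher = more faithful)."""
    return float(np.mean(black_box_predict(X) == surrogate_predict(X)))

def robustness(explain, x, n_perturbations=100, eps=0.01, seed=0):
    """Average cosine similarity between the explanation of x and the
    explanations of slightly perturbed copies of x (higher = more robust)."""
    rng = np.random.default_rng(seed)
    e0 = explain(x)
    sims = []
    for _ in range(n_perturbations):
        x_p = x + rng.normal(scale=eps, size=x.shape)  # small Gaussian noise
        e = explain(x_p)
        sims.append(np.dot(e0, e) /
                    (np.linalg.norm(e0) * np.linalg.norm(e) + 1e-12))
    return float(np.mean(sims))
```

Under these definitions, a high-fidelity explanation mimics the black box closely, and a robust one changes little when the input barely changes; the study varies both properties and observes the effect on the two trust measures.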
Keywords
Human-computer interaction, Machine learning explanation, User trust, Fidelity, Robustness