Psychology and AI at a Crossroads: How Might Complex Systems Explain Themselves?

The American Journal of Psychology (2022)

Abstract

A challenge in building useful artificial intelligence (AI) systems is that people need to understand how they work in order to achieve appropriate trust and reliance. This has become a topic of considerable interest, manifested as a surge of research on Explainable AI (XAI). Much of this research assumes a model in which the AI automatically generates an explanation and presents it to the user, whose understanding of the explanation leads to better performance. Psychological research on explanatory reasoning shows that this is a limited model. The design of XAI systems must be fully informed by a model of cognition and a model of pedagogy, based on empirical evidence of what happens when people try to explain complex systems to other people and what happens as people try to reason out how a complex system works. In this article we discuss how and why C. S. Peirce's notion of abduction is the best model for XAI. Peirce's notion of abduction as an exploratory activity can be regarded as supported by its concordance with models of expert reasoning developed by modern applied cognitive psychologists.
Keywords
abduction, Peirce, artificial intelligence, explanation, expert reasoning, training