Autonomous morals: Inferences of mind predict acceptance of AI behavior in sacrificial moral dilemmas

April D. Young, Andrew E. Monroe

Journal of Experimental Social Psychology (2019)

Abstract
Three studies compared people's judgments of autonomous vehicles (AVs) and humans faced with a moral dilemma. Study 1 showed that, for identical decisions, AVs were judged as more blameworthy, less moral, and less trustworthy than humans. However, perceiving AVs as having a human-like mind reduced this difference. Study 2 extended this finding by manipulating AV mindedness: describing AVs' decision-making capacity in mentalistic terms (relative to mechanistic terms) reduced blame and anger and fostered greater trust and perceptions of morality. Study 3 replicated these findings and demonstrated that perceived mindedness predicted judgments of trust, morality, and willingness to purchase or ride in an AV. These findings suggest that people's moral reservations about AVs may derive from doubting that AVs have the mental capacities necessary for moral judgment, and that one route to improving trust in AVs is to design them with a veneer of human-like mental qualities.
Keywords
Moral judgment, Mind perception, Artificial intelligence, Social cognition