How guilty is a robot who kills other robots?

2020 11th International Conference on Information, Intelligence, Systems and Applications (IISA)

Citations: 0 | Views: 13
Abstract
Safety may depend crucially on making moral judgments. To date, little is known about the possibility of intervening in the processes that lead to moral judgments concerning the behavior of artificial agents. The study reported here involved 293 students from the University of Siena, who made moral judgments after reading the description of an event in which a person or a robot killed other people or robots. The study was conducted through an online questionnaire. The results suggest that moral judgments depend essentially on the type of victim and differ depending on whether human or artificial agents are involved. Furthermore, certain characteristics of the evaluators, such as a greater or lesser disposition to attribute mental states to artificial agents, influence these evaluations. By contrast, the level of familiarity with these systems appears to have only a limited effect.
Keywords
ethics,responsibility judgments,attribution of mental states,artificial agents,education