Fairness and Deception in Human Interactions with Artificial Agents

Theodor Cimpeanu, Alexander J. Stewart

arXiv (2023)

Abstract
Online information ecosystems are now central to our everyday social interactions. Of the many opportunities and challenges this presents, the capacity for artificial agents to shape individual and collective human decision-making in such environments is of particular importance. In order to assess and manage the impact of artificial agents on human well-being, we must consider not only the technical capabilities of such agents, but also the impact they have on human social dynamics at the individual and population level. We approach this problem by modelling the potential for artificial agents to "nudge" attitudes to fairness and cooperation in populations of human agents, who update their behavior according to a process of social learning. We show that the presence of artificial agents in a population playing the ultimatum game generates highly divergent, multi-stable outcomes in the learning dynamics of human agents' behavior. These outcomes correspond to universal fairness (successful nudging), universal selfishness (failed nudging), and a strategy of fairness towards artificial agents and selfishness towards other human agents (unintended consequences of nudging). We then consider the consequences of human agents shifting their behavior when they are aware that they are interacting with an artificial agent. We show that, under a wide range of circumstances, artificial agents can achieve optimal outcomes in their interactions with human agents while avoiding deception. However, we also find that, in the donation game, deception tends to make nudging easier to achieve.
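To illustrate the kind of setup the abstract describes, the sketch below simulates a well-mixed population of human agents playing the ultimatum game against one another and against a few artificial agents holding a fixed fair strategy, with humans updating by a pairwise-comparison (Fermi) imitation rule. This is a minimal sketch under assumed settings: the population sizes, selection strength, mutation rate, the bots' (0.5, 0.5) strategy, and the Fermi rule itself are illustrative choices, not the authors' actual model.

```python
import math
import random

# Illustrative parameters (assumptions, not values from the paper).
N_HUMANS = 98        # human agents who learn socially
N_BOTS = 2           # artificial agents with a fixed strategy
BETA = 1.0           # selection strength in the Fermi imitation rule
GENERATIONS = 20000  # number of imitation events
MUTATION = 0.01      # chance a human explores a random strategy instead

def random_strategy():
    """Ultimatum-game strategy (p, q): offer p as proposer, accept offers >= q."""
    return (random.random(), random.random())

def payoff(me, other):
    """Average payoff of `me` against `other` over both proposer/responder roles."""
    p1, q1 = me
    p2, q2 = other
    as_proposer = (1 - p1) if p1 >= q2 else 0.0  # offer accepted -> keep 1 - p1
    as_responder = p2 if p2 >= q1 else 0.0       # accept offer -> receive p2
    return (as_proposer + as_responder) / 2

def avg_payoff(idx, humans, bots):
    """Mean payoff of human `idx` against everyone else in the mixed population."""
    me = humans[idx]
    total = sum(payoff(me, humans[k]) for k in range(len(humans)) if k != idx)
    total += sum(payoff(me, bot) for bot in bots)
    return total / (len(humans) - 1 + len(bots))

def simulate():
    humans = [random_strategy() for _ in range(N_HUMANS)]
    bots = [(0.5, 0.5)] * N_BOTS  # artificial agents offer, and insist on, fair splits

    for _ in range(GENERATIONS):
        i, j = random.sample(range(N_HUMANS), 2)
        if random.random() < MUTATION:
            humans[i] = random_strategy()  # exploration / mutation
            continue
        # Pairwise-comparison (Fermi) rule: human i imitates human j with a
        # probability that increases with their payoff difference.
        f_i = avg_payoff(i, humans, bots)
        f_j = avg_payoff(j, humans, bots)
        if random.random() < 1 / (1 + math.exp(-BETA * (f_j - f_i))):
            humans[i] = humans[j]

    mean_offer = sum(p for p, _ in humans) / N_HUMANS
    print(f"mean human offer after learning: {mean_offer:.2f}")

if __name__ == "__main__":
    simulate()
```

Running this from different random initial conditions is one simple way to see the kind of multi-stability the abstract reports: separate runs of the same sketch can settle on quite different population-level offer levels. Note that in this simplified version humans cannot tell bots from other humans, so it does not capture the discrimination or deception scenarios the paper also analyses.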