Identifying psychological features of robots that encourage and discourage trust

Jason E. Plaks, Laura Bustos Rodriguez, Reem Ayad

Computers in Human Behavior (2022)

Citations: 8 | Views: 20
Abstract
Trust is a significant predictor of humans' willingness to engage with robots. What increases – and decreases – human-robot trust? In contrast with research that has focused on robots' physical features and gestures, the present study examined psychological features. We operationalized trust as the willingness to make oneself vulnerable to potential exploitation. Participants (N = 811) played two rounds of an online Repeated Prisoner's Dilemma game against a robotic or human counterpart. The counterpart was randomly varied to display high versus low levels of four theoretically derived dimensions of humanness: Values, Autonomy, Social Connection, and Self-Aware Emotions. Varying the robotic counterpart's expressed commitment to Values from low to high increased participants' likelihood of choosing the cooperative option. In contrast, varying the robot's Self-Aware Emotions from low to high increased participants' likelihood of choosing the competitive option. These data suggest that imbuing a robot with a commitment to moral principles fosters higher trust that the robot will not choose the exploitative option, whereas imbuing a robot with a high level of emotional self-awareness hinders this type of trust. This work represents a starting point for the development of a more comprehensive model of the psychology of human-robot trust.
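The cooperate/compete choice structure described above follows the standard Prisoner's Dilemma payoff ordering. As a minimal sketch (the payoff values below are illustrative assumptions in the conventional T > R > P > S ordering, not the amounts used in the study), one round can be modeled as:

```python
# Illustrative payoff matrix for one round of a Prisoner's Dilemma.
# Values follow the standard ordering (temptation > reward > punishment > sucker);
# the study's actual stakes are not given in the abstract.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation (R, R)
    ("cooperate", "compete"):   (0, 5),  # participant exploited (S, T)
    ("compete",   "cooperate"): (5, 0),  # participant exploits (T, S)
    ("compete",   "compete"):   (1, 1),  # mutual defection (P, P)
}

def play_round(participant_choice: str, counterpart_choice: str):
    """Return (participant_payoff, counterpart_payoff) for one round."""
    return PAYOFFS[(participant_choice, counterpart_choice)]
```

Choosing "cooperate" makes the participant vulnerable to the sucker payoff, which is the operationalization of trust the authors use.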
Keywords
Human-robot interaction, Trust, Morality, Emotions