Agent transparency in mixed-initiative multi-UxV control: How should intelligent agent collaborators speak their minds?

Computers in Human Behavior (2023)

Abstract
Controlling multiple unmanned systems is a complex command-and-control endeavor, but pairing human operators with an intelligent agent (IA) teammate can buttress the collection and synthesis of data and improve complex decision making. Effective human-autonomy teams (HATs) require human trust in IA teammates to be properly calibrated, which can be supported by communications about the IA's underlying functions, or "transparency". One prominent guide for applying transparency is Chen and colleagues' Situation awareness-based Agent Transparency (SAT) model. This effort sought to extend understanding of the model's application by manipulating secondary transparency communication parameters: face threat (i.e., threat to a person's sense of social standing) and the design of transparency communication (verbal, graphical, and iconographic). Results revealed that increasing face threat can improve reliance calibration at low transparency but may be detrimental when transparency is high. Outcomes concerning the method of transparency communication suggest that while verbal presentation of transparency information is sufficient, and even preferred, at a low level of transparency, reliance on graphical and iconographic presentations increases at a higher level of transparency.
Keywords
Human-autonomy teaming, Transparency, Face threat, Trust, Reliance, Unmanned systems, Decision making, Decision aids, Interface design