Actual Trust in Multiagent Systems.

International Joint Conference on Autonomous Agents & Multiagent Systems (2024)

Abstract
We study how trust can be established in multiagent systems where human and AI agents collaborate. We propose a computational notion of actual trust, emphasising the modelling of trust based on agents' capacity to deliver tasks in prospect. Unlike reputation-based approaches, we consider the specific setting in which agents interact and model a forward-looking notion of trust. We provide a conceptual analysis of the characteristics of actual trust and highlight relevant trust verification tools. By advancing the understanding and verification of trust in collaborative systems, we contribute to responsible and trustworthy human-AI interactions, enhancing reliability across domains.
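As a rough illustration of the contrast the abstract draws, the sketch below compares a backward-looking, reputation-style score with a forward-looking check of whether an agent currently has the capacity to deliver a specific task. All names here (Task, Agent, actual_trust, the capability/resource model) are illustrative assumptions for this sketch and do not reproduce the paper's formalism.

```python
# Minimal, hypothetical sketch: reputation-based trust vs. a forward-looking,
# capacity-based check. Names and the capability/resource model are
# illustrative assumptions, not definitions from the paper.

from dataclasses import dataclass, field


@dataclass
class Task:
    """A task, described by the capabilities it requires and the resources it consumes."""
    name: str
    required_capabilities: frozenset[str]
    required_resources: int


@dataclass
class Agent:
    """An agent described by what it can currently do, not only by its history."""
    name: str
    capabilities: set[str] = field(default_factory=set)
    available_resources: int = 0
    past_ratings: list[float] = field(default_factory=list)


def reputation_trust(agent: Agent) -> float:
    """Backward-looking: average of past ratings, ignoring the current setting."""
    if not agent.past_ratings:
        return 0.0
    return sum(agent.past_ratings) / len(agent.past_ratings)


def actual_trust(task: Task, trustee: Agent) -> bool:
    """Forward-looking: does the trustee currently have the capacity
    (capabilities and resources) to deliver this specific task?"""
    has_capabilities = task.required_capabilities <= trustee.capabilities
    has_resources = trustee.available_resources >= task.required_resources
    return has_capabilities and has_resources


if __name__ == "__main__":
    deliver_report = Task("deliver_report", frozenset({"write", "analyse"}), 2)
    bob = Agent("bob", capabilities={"write", "analyse"},
                available_resources=1, past_ratings=[0.9, 0.95])

    # High reputation, yet no capacity to deliver this task in prospect.
    print(f"reputation score: {reputation_trust(bob):.2f}")
    print(f"actual trust in prospect: {actual_trust(deliver_report, bob)}")
```

In this toy setting the two notions can disagree: an agent with an excellent track record may still lack the capabilities or resources needed for the task at hand, which is the kind of situation a forward-looking notion of trust is meant to capture.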