Probability pooling for dependent agents in collective learning

Artificial Intelligence (2020)

Abstract
The use of copulas is proposed as a way of modelling dependencies between different agents' probability judgements when carrying out probability pooling. This is combined with an established Bayesian model in which pooling is viewed as a form of updating on the basis of probability values provided by different individuals. Adopting the Frank family of copulas, we investigate the effect of different assumed levels of comonotonic dependence between individuals in the context of a collective learning problem in which a population of agents must reach consensus on which of two mutually exclusive and exhaustive hypotheses is true. In this scenario agents receive evidence from two sources: directly from the environment, and from other agents in the form of probability judgements. They then apply Bayesian updating to the former and probability pooling to the latter. We carry out multi-agent simulation experiments and show that optimal population-level performance is obtained under the assumption of some degree of comonotonicity between agents, and consequently that the standard assumption of agent independence is suboptimal. This is found to be particularly true of scenarios with a large amount of noise and very little direct evidence. Finally, we investigate dynamic environments in which the true state of the world changes, and show that identifying the optimal level of agent dependency has an even greater effect on performance than in static environments where the true state remains constant.
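To illustrate the copula machinery the abstract refers to, a minimal sketch of the Frank family follows. The function name and the θ parameterisation are assumptions for illustration only; the paper's full Bayesian pooling operator is not reproduced here. The Frank copula C(u, v; θ) recovers the independence copula u·v as θ → 0 and approaches the comonotonic copula min(u, v) as θ → ∞, which is the dependence spectrum the experiments vary over.

```python
import math

def frank_copula(u, v, theta):
    """Frank copula C(u, v; theta) for u, v in [0, 1].

    theta > 0 models positive (comonotonic-leaning) dependence
    between two agents' probability judgements; theta -> 0
    recovers the independence copula C(u, v) = u * v.
    """
    if abs(theta) < 1e-9:
        return u * v  # independence limit
    # (e^{-theta*u} - 1)(e^{-theta*v} - 1) / (e^{-theta} - 1),
    # computed with expm1/log1p for numerical stability
    ratio = math.expm1(-theta * u) * math.expm1(-theta * v) / math.expm1(-theta)
    return -math.log1p(ratio) / theta
```

For fixed marginals, increasing θ raises the joint probability mass assigned to both agents reporting concordant judgements, which is the sense in which the pooling rule built on this copula assumes comonotonic dependence.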
Key words
Probability pooling, Copulas, Collective learning, Dependent agents