Graph Exploration for Effective Multiagent Q-Learning

Ainur Zhaikhan, Ali H. Sayed

IEEE Transactions on Neural Networks and Learning Systems (2024)

Abstract
This article proposes an exploration technique for multiagent reinforcement learning (MARL) with graph-based communication among agents. We assume that the individual rewards received by the agents are independent of the actions of the other agents, while their policies are coupled. In the proposed framework, neighboring agents collaborate to estimate the uncertainty about the state-action space in order to explore more efficiently. Unlike existing works, the proposed algorithm does not require counting mechanisms and can be applied to continuous-state environments without complex conversion techniques. Moreover, the proposed scheme allows agents to communicate in a fully decentralized manner with minimal information exchange: in continuous-state scenarios, each agent needs to exchange only a single parameter vector. The performance of the algorithm is verified with theoretical results for discrete-state scenarios and with experiments for continuous ones.
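The abstract describes neighbors pooling information to estimate state-action uncertainty, with each agent exchanging only a single parameter vector. The sketch below is not the authors' algorithm; it is a minimal illustration of the general idea under assumed details: agents hold linear Q-function parameters, share them over a ring graph, and treat disagreement among neighbors' value estimates as an exploration bonus (no visit counting, and it works directly with continuous state features).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N agents on a ring graph, each with a linear
# Q-approximation over D-dim state features and A discrete actions.
N, D, A = 5, 4, 3
neighbors = {i: [(i - 1) % N, i, (i + 1) % N] for i in range(N)}  # incl. self
W = np.stack([rng.normal(size=(A, D)) for _ in range(N)])  # per-agent params

def explore_bonus(i, phi):
    """Disagreement among neighbors' Q-estimates as an uncertainty proxy.

    Agent i only needs each neighbor's parameter matrix (one parameter
    vector per agent, echoing the minimal-exchange claim in the abstract).
    """
    qs = np.stack([W[j] @ phi for j in neighbors[i]])  # (|N_i|, A)
    return qs.std(axis=0)  # larger spread => less-agreed-upon region

def act(i, phi, beta=1.0):
    """Optimism-style choice: own value estimate plus uncertainty bonus."""
    q = W[i] @ phi
    return int(np.argmax(q + beta * explore_bonus(i, phi)))

phi = rng.normal(size=D)       # current state features (shared for demo)
actions = [act(i, phi) for i in range(N)]
print(actions)
```

In this toy version, exploration is driven entirely by neighbor disagreement rather than visit counts, which is what makes the approach applicable to continuous states; the actual uncertainty estimator and communication protocol are specified in the paper.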
Keywords
Continuous state space,exploration,multiagent reinforcement learning (MARL),parallel Markov decision process (MDP)