A Novel Multi-Agent Parallel-Critic Network Architecture For Cooperative-Competitive Reinforcement Learning

IEEE Access (2020)

Cited by 6
Abstract
Multi-agent deep reinforcement learning (MDRL) is an emerging research hotspot and application direction in the fields of machine learning and artificial intelligence. MDRL encompasses many algorithms, rules, and frameworks, and is currently studied in swarm systems, energy allocation optimization, stock analysis, and sequential social dilemmas, with a very promising future. In this paper, a parallel-critic method based on the classic MDRL algorithm MADDPG is proposed to alleviate the training instability problem in cooperative-competitive multi-agent environments. Furthermore, a policy smoothing technique is introduced into the proposed method to decrease the variance of the learned policies. The method is evaluated in three different scenarios of the widely used multi-agent particle environment (MPE). Statistics collected from the experimental results show that our method significantly improves training stability and performance compared to vanilla MADDPG.
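The abstract names the two key ingredients, a parallel-critic architecture and policy smoothing, but gives no implementation details. The PyTorch sketch below is therefore only a plausible reading, not the authors' method: it assumes the parallel critics are centralized MADDPG-style critics whose target Q-values are averaged (min- or median-aggregation would fit the description equally well), and that policy smoothing means TD3-style clipped Gaussian noise on target actions. All names and hyperparameters here (Critic, smoothed_target_actions, parallel_critic_target, noise_std, noise_clip, gamma) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Centralized critic: scores the joint observation-action pair,
    as in MADDPG's centralized-training / decentralized-execution scheme."""
    def __init__(self, joint_obs_dim, joint_act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

def smoothed_target_actions(target_actors, next_obs_per_agent,
                            noise_std=0.2, noise_clip=0.5):
    """Policy smoothing (assumed TD3-style): add clipped Gaussian noise
    to each agent's target action before bootstrapping."""
    acts = []
    for actor, obs in zip(target_actors, next_obs_per_agent):
        a = actor(obs)
        noise = (torch.randn_like(a) * noise_std).clamp(-noise_clip, noise_clip)
        acts.append((a + noise).clamp(-1.0, 1.0))
    return torch.cat(acts, dim=-1)

def parallel_critic_target(target_critics, joint_next_obs, joint_next_act,
                           reward, done, gamma=0.95):
    """Bootstrapped TD target averaged over K parallel target critics
    (averaging is an assumption; the paper may aggregate differently)."""
    qs = torch.stack([c(joint_next_obs, joint_next_act) for c in target_critics])
    return reward + gamma * (1.0 - done) * qs.mean(dim=0)

# Each of the K online critics would then regress onto the same target y:
#   y = parallel_critic_target(...)
#   loss = sum(F.mse_loss(c(joint_obs, joint_act), y) for c in online_critics)
```

Averaging several independently initialized target critics lowers the variance of the bootstrapped target, which is one plausible way such an architecture could stabilize MADDPG training; smoothing the target actions additionally discourages the critics from exploiting sharp, spurious peaks in the learned policies.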
Keywords
Training, Task analysis, Machine learning, Stability analysis, Games, Network architecture, Learning (artificial intelligence), Multi-agent system, deep reinforcement learning, parallel-critic architecture, training stability