Natural Emergence of Heterogeneous Strategies in Artificially Intelligent Competitive Teams

ICSI (1) 2021

Abstract
Multi-agent strategies in mixed cooperative-competitive environments can be hard to craft by hand because each agent must coordinate with its teammates while competing with its opponents. Learning-based algorithms are appealing, but they require a competitive opponent to train against, which is often not available. Many scenarios require heterogeneous agent behavior for a team's success, and this increases the complexity of the learning algorithm. In this work, we develop a mixed cooperative-competitive multi-agent environment called FortAttack in which two teams compete against each other. We show that modeling agents with Graph Neural Networks (GNNs) and training them from scratch with Reinforcement Learning (RL) leads to the co-evolution of increasingly complex strategies for each team. Through competition in Multi-Agent Reinforcement Learning (MARL), we observe the natural emergence of heterogeneous behavior among homogeneous agents when such behavior can lead to the team's success. Heterogeneous behavior from homogeneous agents is appealing because any agent can take over the role of any other agent at test time. Finally, we propose ensemble training, in which we utilize the evolved opponent strategies to train a single policy for friendly agents. We were able to train a large number of agents on a commodity laptop, which demonstrates the scalability and efficiency of our approach. The code and a video presentation are available online (Code: https://github.com/Ankur-Deka/Emergent-Multiagent-Strategies, Video: https://youtu.be/ltHgKYc0F-E).
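Below is a minimal sketch, assuming a PyTorch implementation, of the kind of shared GNN policy the abstract describes: every agent runs the same parameters, and attention over all entities (teammates and opponents) lets behavior differ by context. The class name GNNPolicy, the observation layout, and all dimensions are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class GNNPolicy(nn.Module):
    """Shared policy: one set of weights for all homogeneous agents."""
    def __init__(self, obs_dim: int, hidden: int = 64, n_actions: int = 5):
        super().__init__()
        self.embed = nn.Linear(obs_dim, hidden)   # per-entity node encoder
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)  # per-agent action logits

    def forward(self, entities: torch.Tensor) -> torch.Tensor:
        # entities: (batch, n_entities, obs_dim); each agent is a node in a
        # fully connected graph over teammates and opponents.
        h = torch.relu(self.embed(entities))
        h, _ = self.attn(h, h, h)                 # one message-passing step
        return self.head(h)

# Because the parameters are shared, any agent can take over another's role
# at test time; heterogeneous behavior comes only from differing inputs.
policy = GNNPolicy(obs_dim=8)
logits = policy(torch.randn(1, 6, 8))             # 6 entities, 8 features each
print(logits.shape)                               # torch.Size([1, 6, 5])

The ensemble-training step can be pictured the same way: keep the opponent strategies that emerged during co-evolution as frozen checkpoints and sample one per episode, so a single friendly-team policy must cope with all of them. run_episode and the checkpoint names below are hypothetical stand-ins, not the authors' API.

import random

def run_episode(friendly_policy, opponent_checkpoint):
    # Stub: roll out one FortAttack episode against the frozen opponent
    # and update friendly_policy with RL; details omitted.
    pass

friendly_policy = GNNPolicy(obs_dim=8)            # reuses the class above
opponent_pool = ["opp_stage1.pt", "opp_stage2.pt", "opp_stage3.pt"]

for _ in range(1000):
    # One evolved opponent strategy is sampled per episode.
    run_episode(friendly_policy, random.choice(opponent_pool))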
Keywords
Multi-Agent Reinforcement Learning (MARL), Graph Neural Networks (GNNs), Co-evolution