Evolution Of Cooperation Through Genetic Collective Learning And Imitation In Multiagent Societies

2018 Conference on Artificial Life (ALIFE 2018)

Abstract
How to facilitate the evolution of cooperation is a key question in multi-agent systems and game-theoretic situations. Individual reinforcement learners often fail to learn coordinated behavior. An evolutionary approach to selection can produce optimal behavior, but it may require significant computational effort. Social imitation of behavior yields only weak coordination in a society. Our goal in this paper is to improve agents' behavior with reduced computational effort by combining evolutionary techniques, collective learning, and social imitation. We designed a genetic-algorithm-based cooperation framework equipped with these techniques to solve coordination games in complex multi-agent networks. In this framework, offspring agents inherit the more successful behavior selected from game-playing parent agents, and all agents in the network improve their performance through collective reinforcement learning and social imitation. Experiments were carried out to test the proposed framework and compare its performance with previous work. The results show that the framework is more effective for the evolution of cooperation in complex multi-agent social systems than evolutionary, reinforcement-learning, or imitation systems on their own.
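The combination of mechanisms described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the network topology (a ring), the game (pairwise coordination, payoff 1 for matching a neighbor), and all parameter names and values are illustrative assumptions. The sketch interleaves the three ingredients: per-agent Q-learning, occasional imitation of a fitter neighbor's policy, and a generational genetic step in which offspring inherit mutated copies of successful parents' policies.

```python
import random

# Hypothetical sketch of the three combined mechanisms (assumed details):
#   1. reinforcement learning - each agent keeps Q-values over two actions,
#   2. social imitation       - occasionally copy a fitter neighbour's Q-values,
#   3. genetic selection      - offspring inherit mutated Q-values of fit parents.

ACTIONS = (0, 1)

def make_agent():
    return {"q": [random.random() * 0.1 for _ in ACTIONS], "score": 0.0}

def choose(agent, eps=0.1):
    # Epsilon-greedy action selection over the agent's Q-values.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: agent["q"][a])

def play_round(agents, alpha=0.2):
    n = len(agents)
    for i, agent in enumerate(agents):
        neighbour = agents[(i + 1) % n]          # right neighbour on the ring
        a_i, a_j = choose(agent), choose(neighbour)
        r = 1.0 if a_i == a_j else 0.0           # coordination payoff
        agent["q"][a_i] += alpha * (r - agent["q"][a_i])  # Q-learning update
        agent["score"] += r

def imitate(agents, p=0.05):
    n = len(agents)
    for i, agent in enumerate(agents):
        best = max((agents[(i - 1) % n], agents[(i + 1) % n]),
                   key=lambda a: a["score"])
        if best["score"] > agent["score"] and random.random() < p:
            agent["q"] = list(best["q"])         # copy the fitter policy

def evolve(agents, frac=0.25, sigma=0.05):
    # Replace the least successful quarter with mutated offspring of the best.
    agents.sort(key=lambda a: a["score"], reverse=True)
    k = max(1, int(len(agents) * frac))
    for idx in range(len(agents) - k, len(agents)):
        parent = random.choice(agents[:k])
        child_q = [q + random.gauss(0, sigma) for q in parent["q"]]
        agents[idx] = {"q": child_q, "score": 0.0}
    for a in agents:
        a["score"] = 0.0                         # reset fitness per generation

def run(n_agents=50, generations=30, rounds=20, seed=0):
    random.seed(seed)
    agents = [make_agent() for _ in range(n_agents)]
    coord = 0.0
    for _ in range(generations):
        for _ in range(rounds):
            play_round(agents)
        imitate(agents)
        coord = sum(a["score"] for a in agents) / (n_agents * rounds)
        evolve(agents)
    return coord  # coordination rate in the final generation

if __name__ == "__main__":
    print(f"final coordination rate: {run():.2f}")
```

The genetic step copies whole Q-tables rather than actions, so offspring inherit a behavioral tendency that learning and imitation then continue to refine, mirroring the framework's idea of inheriting "more successful behavior" rather than fixed strategies.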