Continuous And Embedded Learning For Multi-Agent Systems

2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12(2006)

Cited by 5
Abstract
This paper describes multi-agent strategies for applying Continuous and Embedded Learning (CEL). In the CEL architecture [1], an agent maintains a simulator based on its current knowledge of the world and applies a learning algorithm that obtains its performance measure from this simulator. The simulator is updated to reflect changes in the environment or robot state that can be detected by a monitor, such as sensor failures. In this paper, we adapt this architecture to a multi-agent setting in which the monitor's information is communicated among the team members, effectively creating a distributed monitor. The parameters of the current control algorithm (in our case, rulebases learned by Genetic Algorithms) used by all of the agents are added to the monitor as well, allowing for cooperative learning. We show that communicating agent status (e.g., failures) among the team members allows the agents to dynamically adapt to team properties, in this case team size. Furthermore, we show that an agent is able to switch between specializing within a section of the domain when there are many team members and generalizing to other parts of the domain when the rest of the team members are disabled. Finally, we discuss the future potential of this method, most notably the creation of a Distributed Case-Based Reasoning system in which the cases are actual Genetic Algorithm population members that can be swapped among team members.
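The distributed-monitor idea in the abstract can be illustrated with a minimal sketch: each agent keeps its own view of which teammates are active, status changes (e.g., a failure) are broadcast to every teammate's monitor, and each agent can then adapt its behavior to the current team size. All class and method names here are illustrative, not from the paper's implementation.

```python
# Hypothetical sketch of the distributed-monitor mechanism: each agent's
# "monitor" is its view of teammate status; broadcasts keep all views current.
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: int
    alive: bool = True
    # this agent's monitor: which teammates it believes are active
    team_view: dict = field(default_factory=dict)

    def broadcast_status(self, team):
        """Report this agent's status to every teammate's monitor."""
        for other in team:
            other.team_view[self.agent_id] = self.alive

    def active_team_size(self):
        """Team size as seen by this agent; drives specialize/generalize."""
        return sum(1 for alive in self.team_view.values() if alive)

def run_round(team):
    """One communication round: every agent's status reaches every monitor."""
    for agent in team:
        agent.broadcast_status(team)

team = [Agent(i) for i in range(4)]
run_round(team)
print(team[0].active_team_size())  # all four agents visible

# agent 2 is disabled; after the next round every monitor reflects this,
# so the remaining agents could re-generalize over agent 2's subdomain
team[2].alive = False
run_round(team)
print(team[0].active_team_size())
```

In the paper's full setting, the broadcast would also carry the GA rulebase parameters, so the same channel supports cooperative learning as well as failure detection.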
Keywords
genetic algorithms, multi-agent coevolutionary systems, cooperative coevolution, multi-agent communication