Curriculum Learning Based Multi-Agent Path Finding for Complex Environments

IJCNN (2023)

Abstract
Multi-agent reinforcement learning (MARL) is a promising tool for the Multi-Agent Path Finding (MAPF) task, which aims to find a conflict-free path for each agent from its start position to its goal position. MARL uses global information to learn a cooperation mechanism among agents by maximising the cumulative team reward, which is often very sparse. This sparsity forces agents to blindly explore all possible paths, making MARL methods difficult to converge in complex environments. To address this issue, this paper proposes Curriculum-based Path-finding Learning (CPL), a novel method under the framework of curriculum learning that lets agents start with simple skills and learn cooperative strategies stage by stage for more efficient training. Specifically, CPL divides the training process into three stages and speeds up learning by increasing task difficulty from easy to hard. Experiments on grid worlds with random obstacles show that the proposed method significantly outperforms state-of-the-art learning-based methods in terms of success rate and makespan.
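
As a rough illustration of the easy-to-hard idea described in the abstract, the sketch below schedules grid-world MAPF training over three stages of increasing difficulty. It is a minimal sketch under assumed settings, not the authors' implementation: the stage parameters and all names (StageConfig, sample_task, train_with_curriculum, the train_episode callback) are illustrative assumptions.

```python
# Hypothetical three-stage easy-to-hard curriculum for grid-world MAPF training.
# Not the paper's code; stage parameters and names are illustrative assumptions.
import random
from dataclasses import dataclass


@dataclass
class StageConfig:
    grid_size: int           # side length of the square grid world
    num_agents: int          # number of agents trained simultaneously
    obstacle_density: float  # fraction of cells blocked by obstacles
    episodes: int            # training budget allotted to this stage


# Stage 1: few agents, no obstacles; Stage 3: crowded, cluttered maps.
CURRICULUM = [
    StageConfig(grid_size=10, num_agents=2,  obstacle_density=0.0, episodes=2000),
    StageConfig(grid_size=20, num_agents=8,  obstacle_density=0.1, episodes=4000),
    StageConfig(grid_size=40, num_agents=32, obstacle_density=0.3, episodes=8000),
]


def sample_task(cfg: StageConfig, rng: random.Random):
    """Sample one MAPF instance (obstacles, starts, goals) for a given stage."""
    cells = [(r, c) for r in range(cfg.grid_size) for c in range(cfg.grid_size)]
    rng.shuffle(cells)
    n_obstacles = int(cfg.obstacle_density * len(cells))
    obstacles = set(cells[:n_obstacles])
    free = cells[n_obstacles:]
    starts = free[:cfg.num_agents]
    goals = free[cfg.num_agents:2 * cfg.num_agents]
    return obstacles, starts, goals


def train_with_curriculum(train_episode, seed: int = 0):
    """Run training stage by stage; `train_episode` is the MARL rollout + update step."""
    rng = random.Random(seed)
    for stage_idx, cfg in enumerate(CURRICULUM, start=1):
        for _ in range(cfg.episodes):
            task = sample_task(cfg, rng)
            train_episode(task)  # one episode of experience and policy update on this task
        print(f"finished stage {stage_idx}: {cfg}")
```

Each stage reuses the same policy and MARL update rule; only the task distribution changes, which is the core of a curriculum: early stages give denser learning signal on simple instances before the sparse-reward, many-agent settings are introduced.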
Keywords
Multi-agent path finding, Multi-agent reinforcement learning, Curriculum learning