Modular Networks Prevent Catastrophic Interference in Model-Based Multi-task Reinforcement Learning

MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE (LOD 2021), PT II (2022)

Abstract
In a multi-task reinforcement learning setting, the learner commonly benefits from training on multiple related tasks by exploiting similarities among them. At the same time, the trained agent is able to solve a wider range of different problems. While this effect is well documented for model-free multi-task methods, we demonstrate a detrimental effect when using a single learned dynamics model for multiple tasks. Thus, we address the fundamental question of whether model-based multi-task reinforcement learning benefits from shared dynamics models in a similar way to how model-free methods benefit from shared policy networks. Using a single dynamics model, we see clear evidence of task confusion and reduced performance. As a remedy, enforcing an internal structure for the learned dynamics model by training isolated sub-networks for each task notably improves performance while using the same number of parameters. We illustrate our findings by comparing both methods on a simple gridworld and a more complex ViZDoom multi-task experiment.
Keywords
Model-based reinforcement learning, Multi-task reinforcement learning, Latent space models, Catastrophic interference, Task confusion
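To make the remedy described in the abstract concrete, the sketch below shows one way to enforce an internal structure in a dynamics model by training isolated sub-networks per task, so that gradient updates for one task cannot overwrite the transition dynamics learned for another. This is a minimal illustration only, not the paper's implementation (which uses latent-space models): the class, dimensions, and parameter names (ModularDynamicsModel, hidden_dim, task_id) are assumptions for the example, and the total parameter budget can be matched to a monolithic model by shrinking hidden_dim.

```python
import torch
import torch.nn as nn


class ModularDynamicsModel(nn.Module):
    """Dynamics model with one isolated sub-network per task.

    Because the sub-networks share no parameters, training on one task
    cannot catastrophically interfere with the dynamics learned for another.
    """

    def __init__(self, num_tasks: int, state_dim: int, action_dim: int, hidden_dim: int = 64):
        super().__init__()
        # One independent sub-network per task (no weight sharing).
        self.subnets = nn.ModuleList([
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, state_dim),
            )
            for _ in range(num_tasks)
        ])

    def forward(self, state: torch.Tensor, action: torch.Tensor, task_id: int) -> torch.Tensor:
        # Route the transition prediction through the sub-network owned by task_id.
        x = torch.cat([state, action], dim=-1)
        return self.subnets[task_id](x)


# Usage: predict next states for a batch of transitions from task 0.
model = ModularDynamicsModel(num_tasks=2, state_dim=8, action_dim=2)
states = torch.randn(4, 8)
actions = torch.randn(4, 2)
next_state_pred = model(states, actions, task_id=0)
```

In contrast, a single shared dynamics model would feed all tasks through the same network (possibly with a task identifier appended to the input), which is the configuration where the paper reports task confusion and reduced performance.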