Multitask Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies.

Neural Information Processing Systems (2018)

Abstract
We introduce a new RL problem in which the agent is required to execute a given subtask graph that describes a set of subtasks and their dependencies. Unlike existing approaches that explicitly describe what the agent should do, our problem only describes the properties of subtasks and the relationships among them, which requires the agent to perform complex reasoning to find the optimal subtask to execute. To solve this problem, we propose a neural subtask graph solver (NSS), which encodes the subtask graph using a recursive neural network. To overcome the difficulty of training, we propose a novel non-parametric gradient-based policy to pre-train our NSS agent and further fine-tune it with an actor-critic method. The experimental results on two 2D visual domains show that our agent can perform the complex reasoning needed to find the optimal way of executing the subtask graph and generalizes well to unseen subtask graphs. In addition, we compare our agent with a Monte-Carlo tree search (MCTS) method, showing that our method is much more efficient than MCTS and that the performance of NSS can be further improved by combining it with MCTS.
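The central object in the abstract, a subtask graph whose nodes are subtasks and whose edges are dependencies, can be made concrete with a small sketch. The snippet below is a minimal illustration rather than the authors' NSS implementation: the dependency dictionary, hidden size, mean-pooling over prerequisites, and per-subtask scoring head are all illustrative assumptions. It only shows how a recursive network could encode an acyclic subtask graph bottom-up and produce one score per subtask, which a policy could then use to choose what to execute next.

# Minimal sketch (assumed details, not the paper's NSS code) of encoding a
# subtask graph with a recursive neural network and scoring each subtask.
import torch
import torch.nn as nn

class SubtaskGraphEncoder(nn.Module):
    def __init__(self, num_subtasks, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(num_subtasks, hidden_dim)
        # Combines a subtask's own embedding with the aggregated
        # embeddings of the subtasks it depends on.
        self.combine = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, deps):
        # deps: dict mapping subtask id -> list of prerequisite subtask ids
        # (assumed acyclic). Nodes are encoded bottom-up with memoization.
        memo = {}

        def encode(i):
            if i in memo:
                return memo[i]
            h = self.embed(torch.tensor(i))
            if deps.get(i):
                # Mean-pool the encodings of prerequisite subtasks.
                child = torch.stack([encode(j) for j in deps[i]]).mean(dim=0)
            else:
                child = torch.zeros_like(h)
            memo[i] = self.combine(torch.cat([h, child]))
            return memo[i]

        # One scalar score per subtask, e.g. as logits for a softmax policy.
        return torch.stack([self.score(encode(i)).squeeze(-1)
                            for i in sorted(deps)])

# Example: subtask 2 depends on subtasks 0 and 1.
encoder = SubtaskGraphEncoder(num_subtasks=3)
print(encoder({0: [], 1: [], 2: [0, 1]}))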