ProtoGE: Prototype Goal Encodings for Multi-goal Reinforcement Learning

4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (2019)

Abstract
Current approaches to multi-goal reinforcement learning train the agent directly on the desired goal space. When goals are sparse, binary, and coarsely defined, with each goal representing a set of states, this has at least two downsides. First, transitions between different goals may be sparse, making it difficult for the agent to obtain useful control signals, even using Hindsight Experience Replay [1]. Second, having trained only on the desired goal representation, it is difficult to transfer learning to other goal spaces. We propose the following simple idea: instead of training on the desired coarse goal space, substitute it with a finer (more specific) goal space, perhaps even the agent's state space (the "state-goal" space), and use Prototype Goal Encodings ("ProtoGE") to encode coarse goals as fine ones. This has several advantages. First, an agent trained on an appropriately fine goal space receives more descriptive control signals and can learn to accomplish goals in its desired goal space significantly faster. Second, finer goal representations are more flexible and allow for efficient transfer. The state-goal representation, in particular, is universal: an agent trained on the state-goal space can potentially adapt to arbitrary goals, so long as a ProtoGE map is available. We provide empirical evidence for the above claims and establish a new state-of-the-art in standard multi-goal MuJoCo environments.
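The core substitution may be easier to see as a small sketch. Below is a minimal, hypothetical illustration of a ProtoGE map on a toy reach task: a coarse goal (a sphere of states) is encoded as a single prototype fine goal in the state-goal space, and the dense-by-construction fine goal is what the agent trains against, while success is still evaluated on the original coarse goal. This is not the authors' code; the names (CoarseGoal, protoge, fine_reward), the region-based goal, and the tolerance values are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical sketch of a ProtoGE map on a toy reach task.
# A coarse goal is a sphere of states ("get the end-effector within
# `radius` of `center`"); its ProtoGE encoding is one prototype fine
# goal (here, the sphere's center) in the agent's state-goal space.

class CoarseGoal:
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)  # region center
        self.radius = float(radius)                    # region radius

    def achieved(self, state):
        # Sparse, binary success test on the *desired* coarse goal.
        return np.linalg.norm(state - self.center) <= self.radius

def protoge(coarse_goal):
    # ProtoGE map: encode the coarse goal as one prototype fine goal.
    # Any state inside the region would do; the center is a natural pick.
    return coarse_goal.center

def fine_reward(state, fine_goal, eps=0.01):
    # Reward in the finer goal space: reaching the prototype up to a
    # small tolerance implies the original coarse goal is achieved.
    return 0.0 if np.linalg.norm(state - fine_goal) <= eps else -1.0

# Usage: train (e.g., with HER) against protoge(g) instead of g, and
# report success on the original coarse goal via g.achieved(state).
g = CoarseGoal(center=[0.5, 0.2, 0.4], radius=0.05)
state = np.array([0.5, 0.2, 0.4])
assert fine_reward(state, protoge(g)) == 0.0 and g.achieved(state)
```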