RD2: Reward Decomposition with Representation Disentanglement

NeurIPS 2020: Proceedings of the 34th International Conference on Neural Information Processing Systems (2020)

Cited by 2 | Views 15

Abstract
Reward decomposition, which aims to decompose the full reward into multiple sub-rewards, has been proven beneficial for improving sample efficiency in reinforcement learning. Existing approaches to discovering reward decompositions are mostly policy-dependent, which constrains how diversified or disentangled the behaviors induced by different sub-rewards can be. In this work, we propose a set of novel policy-independent reward decomposition principles that constrain the uniqueness and compactness of the state representations relevant to each sub-reward. Our principles encourage sub-rewards with minimal relevant features while maintaining the uniqueness of each sub-reward. We derive a deep learning algorithm based on these principles and refer to our method as RD$^2$, since it learns reward decomposition and disentangled representation jointly. RD$^2$ is evaluated on a toy case, where the true reward structure is known, and on selected Atari environments, where a reward structure exists but is unknown to the agent, to demonstrate its effectiveness against existing reward decomposition methods.
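To make the idea concrete, here is a minimal toy sketch (not the paper's algorithm) of a policy-independent decomposition: the full reward is a sum of sub-rewards, each depending on a compact, non-overlapping subset of state features. The feature masks, coefficients, and variable names below are all illustrative assumptions for the sketch.

```python
import numpy as np

# Toy state: 4 features; the true reward is a sum of two sub-rewards,
# each depending on a disjoint, compact subset of the features.
rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 4))
sub_reward_1 = 2.0 * states[:, 0]            # depends only on feature 0
sub_reward_2 = -1.5 * states[:, 2]           # depends only on feature 2
full_reward = sub_reward_1 + sub_reward_2    # the agent observes only this sum

# Masks select which features each sub-reward "sees" (its representation).
# Compactness: each mask keeps few features; uniqueness: masks do not overlap.
mask_1 = np.array([1.0, 0.0, 0.0, 0.0])
mask_2 = np.array([0.0, 0.0, 1.0, 0.0])

# Fit one linear predictor per sub-reward on its masked representation,
# supervised only so that the *sum* of the predictions matches the full reward.
X = np.concatenate([states * mask_1, states * mask_2], axis=1)
w, *_ = np.linalg.lstsq(X, full_reward, rcond=None)
pred = X @ w

# With the correct disjoint masks, the sum of sub-reward predictions
# recovers the full reward on this toy data.
print(np.allclose(pred, full_reward))
```

In RD$^2$ itself the masks and sub-reward predictors are learned jointly by a deep network under the compactness and uniqueness principles, rather than fixed by hand as in this sketch.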
Keywords
Reinforcement learning,Deep learning,Uniqueness,Artificial intelligence,Computer science,Compact space