A Deep Reinforcement Learning Approach To Marginalized Importance Sampling With The Successor Representation

International Conference on Machine Learning (ICML), Vol. 139 (2021)

Abstract
Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on simple toy problems. We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. The successor representation can be trained through deep reinforcement learning methodology and decouples the reward optimization from the dynamics of the environment, making the resulting algorithm stable and applicable to high-dimensional domains. We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.
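As a minimal sketch of the quantities the abstract refers to (using standard definitions and generic notation, which may differ from the paper's), the discounted state-action occupancy of the target policy $\pi$ under initial state distribution $\rho_0$, the successor representation of $\pi$, and the MIS density ratio against the sampling distribution $d^D$ are related by:

$$d^\pi(s,a) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s,\, a_t = a \mid s_0 \sim \rho_0,\, \pi),$$
$$\Psi^\pi(s,a \mid \tilde s, \tilde a) = \sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s,\, a_t = a \mid s_0 = \tilde s,\, a_0 = \tilde a,\, \pi),$$
$$d^\pi(s,a) = (1-\gamma)\,\mathbb{E}_{\tilde s \sim \rho_0,\; \tilde a \sim \pi(\cdot\mid\tilde s)}\!\big[\Psi^\pi(s,a \mid \tilde s, \tilde a)\big], \qquad w(s,a) = \frac{d^\pi(s,a)}{d^D(s,a)}.$$

Under these standard definitions, an estimate of the successor representation (trainable with temporal-difference methods) determines the occupancy, and hence the density ratio, which is consistent with the abstract's claim that the ratio can be obtained through ordinary deep RL machinery rather than the optimization tricks used by prior MIS methods.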
Keywords
marginalized importance sampling, deep reinforcement learning approach, successor representation