Offline Multi-task Transfer RL with Representational Penalization
CoRR (2024)
Abstract
We study the problem of representation transfer in offline Reinforcement
Learning (RL), where a learner has access to episodic data collected a priori
from a number of source tasks, and aims to learn a shared representation to be
used in finding a good policy for a target task. Unlike in online RL, where the
agent interacts with the environment while learning a policy, in the offline
setting no such interaction is possible with either the source tasks or the
target task; multi-task offline RL can therefore suffer from incomplete
coverage.
We propose an algorithm that computes pointwise uncertainty measures for the
learnt representation, and we establish a data-dependent upper bound on the
suboptimality of the learnt policy for the target task. Our algorithm leverages
the collective exploration performed by the source tasks to mitigate poor
coverage of some regions by individual tasks, thus overcoming the requirement
of existing offline algorithms that every task achieve uniformly good coverage
for meaningful transfer. We complement our theoretical results with an
empirical evaluation on a rich-observation MDP that requires many samples for
complete coverage. Our findings illustrate the benefits of quantifying and
penalizing the uncertainty in the learnt representation.
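
To make the idea of pointwise uncertainty and pessimistic penalization concrete, below is a minimal sketch under an assumed linear-feature setting. It is not the paper's algorithm: the function names, the elliptical-potential penalty, and the parameters beta and lam are illustrative assumptions. The key point it mirrors from the abstract is that feature data is pooled across all source tasks, so a direction poorly covered by one task can still be well covered by the pool.

```python
import numpy as np

# Sketch only (assumed linear features, not the paper's method):
# pointwise uncertainty of a learnt feature phi(s, a), measured against the
# pooled feature covariance of ALL source tasks, and a pessimistic value
# estimate that subtracts an uncertainty penalty.

def pointwise_uncertainty(phi, source_features, lam=1.0):
    """Elliptical uncertainty sqrt(phi^T Sigma^{-1} phi), where Sigma pools
    feature outer products from every source task plus a ridge term lam * I."""
    d = source_features.shape[1]
    Sigma = source_features.T @ source_features + lam * np.eye(d)
    return float(np.sqrt(phi @ np.linalg.solve(Sigma, phi)))

def penalized_value(phi, w_hat, source_features, beta=1.0, lam=1.0):
    """Lower-confidence value estimate: the linear estimate minus a penalty,
    so the policy is pessimistic wherever the pooled data covers (s, a) poorly."""
    return float(phi @ w_hat) - beta * pointwise_uncertainty(phi, source_features, lam)

# Example with two hypothetical source tasks: a feature direction explored by
# either task contributes coverage; only directions unexplored by every task
# retain high uncertainty.
rng = np.random.default_rng(0)
task_a = rng.normal(size=(200, 4)) * np.array([1.0, 1.0, 0.0, 0.0])  # covers dims 0-1
task_b = rng.normal(size=(200, 4)) * np.array([0.0, 0.0, 1.0, 0.0])  # covers dim 2
pooled = np.vstack([task_a, task_b])

well_covered = np.array([1.0, 0.5, 0.5, 0.0])
uncovered = np.array([0.0, 0.0, 0.0, 1.0])  # dim 3: no task explored it
print(pointwise_uncertainty(well_covered, pooled))  # small: pooled coverage
print(pointwise_uncertainty(uncovered, pooled))     # ~1: ridge prior only
```

In this toy setup, requiring each task alone to cover all four dimensions would fail, whereas the pooled covariance yields low uncertainty everywhere except the direction no task explored, which is the coverage-aggregation effect the abstract describes.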