Distributionally Robust Off-Dynamics Reinforcement Learning: Provable Efficiency with Linear Function Approximation
CoRR (2024)
Abstract
We study off-dynamics Reinforcement Learning (RL), where the policy is
trained on a source domain and deployed to a distinct target domain. We aim to
solve this problem via online distributionally robust Markov decision processes
(DRMDPs), where the learning algorithm actively interacts with the source
domain while seeking the optimal performance under the worst possible dynamics
within an uncertainty set of the source domain's transition kernel. We
provide the first study on online DRMDPs with function approximation for
off-dynamics RL. We find that DRMDPs' dual formulation can induce nonlinearity,
even when the nominal transition kernel is linear, leading to error
propagation. By designing a d-rectangular uncertainty set using the total
variation distance, we remove this additional nonlinearity and bypass the error
propagation. We then introduce DR-LSVI-UCB, the first provably efficient online
DRMDP algorithm for off-dynamics RL with function approximation, and establish
a polynomial suboptimality bound that is independent of the state and action
space sizes. Our work takes a first step towards a deeper understanding of
the provable efficiency of online DRMDPs with linear function approximation.
Finally, we substantiate the performance and robustness of DR-LSVI-UCB through
a range of numerical experiments.
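
To make the role of the total variation (TV) distance concrete, here is a brief sketch in our own notation, not taken from the paper's body. The worst-case expectation of a value function V over a TV ball of radius \rho around a nominal distribution P_0 admits a well-known one-dimensional dual:

\[
\inf_{P:\, D_{\mathrm{TV}}(P, P_0) \le \rho} \mathbb{E}_{P}[V]
= \max_{\alpha \in [\min V,\, \max V]} \Big\{ \mathbb{E}_{P_0}\big[\min(V, \alpha)\big] \;-\; \rho\,\big(\alpha - \min_{s'} \min(V(s'), \alpha)\big) \Big\}.
\]

If the nominal kernel is linear, P_0(s' \mid s, a) = \phi(s, a)^\top \mu(s'), and the uncertainty set is d-rectangular (one TV ball per feature dimension), this dual can be applied factor by factor, so the robust Q-function remains linear in the feature map \phi(s, a). By contrast, divergences such as KL have a log-sum-exp dual, which breaks this linearity; this is the structural reason the TV-based design avoids the extra nonlinearity and error propagation described above.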
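As a self-contained numerical illustration of that dual (a minimal sketch with our own naming, not the paper's implementation), the function below computes the worst-case expectation over a TV ball around a nominal distribution with finite support. The dual objective is concave and piecewise linear in alpha, with breakpoints at the entries of v, so an exact maximizer lies on that finite grid.

    import numpy as np

    def robust_expectation_tv(p0, v, rho):
        # Worst-case expectation of the value vector v over the TV ball of
        # radius rho around the nominal distribution p0, via the dual
        #   max_alpha  E_{p0}[min(v, alpha)] - rho * (alpha - min_s min(v_s, alpha)).
        # The objective is piecewise linear in alpha with breakpoints at the
        # entries of v, so searching np.unique(v) is exact.
        best = -np.inf
        for alpha in np.unique(v):
            clipped = np.minimum(v, alpha)
            best = max(best, p0 @ clipped - rho * (alpha - clipped.min()))
        return best

    # Example: an adversary may move rho = 0.2 probability mass from the
    # high-value state to the low-value state, dropping the expectation
    # from 0.5 to 0.3.
    print(robust_expectation_tv(np.array([0.5, 0.5]), np.array([0.0, 1.0]), 0.2))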