MoMA: Model-based Mirror Ascent for Offline Reinforcement Learning
CoRR(2024)
Abstract
Model-based offline reinforcement learning (RL) methods have achieved
state-of-the-art performance in many decision-making problems thanks to their
sample efficiency and generalizability. Despite these advancements, existing
model-based offline RL approaches either focus on theoretical studies without
developing practical algorithms or rely on a restricted parametric policy
space, thus not fully leveraging the advantages of an unrestricted policy space
inherent to model-based methods. To address this limitation, we develop MoMA, a
model-based mirror ascent algorithm with general function approximations under
partial coverage of offline data. MoMA distinguishes itself from existing
literature by employing an unrestricted policy class. In each iteration, MoMA
conservatively estimates the value function by a minimization procedure within
a confidence set of transition models in the policy evaluation step, then
updates the policy with general function approximations instead of
commonly-used parametric policy classes in the policy improvement step. Under
some mild assumptions, we establish theoretical guarantees of MoMA by proving
an upper bound on the suboptimality of the returned policy. We also provide a
practically implementable, approximate version of the algorithm. The
effectiveness of MoMA is demonstrated via numerical studies.
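To make the two-step structure concrete, the following is a minimal toy sketch of one MoMA-style iteration in a tabular setting: the policy evaluation step takes a pessimistic minimum over a (mocked) confidence set of models, and the policy improvement step applies a KL-regularized mirror-ascent update, which reduces to an exponentiated-gradient rule. All names and sizes (`n_states`, `n_actions`, the Q-table ensemble, the step size `eta`) are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical toy problem sizes (assumptions, not from the paper).
rng = np.random.default_rng(0)
n_states, n_actions, n_models = 4, 3, 5
eta = 0.5  # mirror-ascent step size

# Policy evaluation step (sketch): pessimism via a minimum over a
# confidence set of transition models, mocked here as an ensemble of
# Q-tables, one per candidate model.
q_ensemble = rng.normal(size=(n_models, n_states, n_actions))
q_pessimistic = q_ensemble.min(axis=0)  # shape (n_states, n_actions)

# Policy improvement step (sketch): mirror ascent with a KL divergence
# mirror map over an unrestricted tabular policy class gives a
# multiplicative (exponentiated-gradient) update, then renormalization.
policy = np.full((n_states, n_actions), 1.0 / n_actions)
new_policy = policy * np.exp(eta * q_pessimistic)
new_policy /= new_policy.sum(axis=1, keepdims=True)
```

In the paper the policy class is represented with general function approximation rather than a table; the tabular case above only illustrates the shape of the update.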