Beyond discounted returns: Robust Markov decision processes with average and Blackwell optimality
CoRR (2023)
Abstract
Robust Markov Decision Processes (RMDPs) are a widely used framework for
sequential decision-making under parameter uncertainty. RMDPs have been
extensively studied when the objective is to maximize the discounted return,
but little is known for average optimality (optimizing the long-run average of
the rewards obtained over time) and Blackwell optimality (remaining discount
optimal for all discount factors sufficiently close to 1). In this paper, we
prove several foundational results for RMDPs beyond the discounted return. We
show that average optimal policies can be chosen stationary and deterministic
for sa-rectangular RMDPs but, perhaps surprisingly, that history-dependent
(Markovian) policies strictly outperform stationary policies for average
optimality in s-rectangular RMDPs. We also study Blackwell optimality for
sa-rectangular RMDPs, where we show that approximate Blackwell optimal
policies always exist, although Blackwell optimal policies may not exist. We
also provide a sufficient condition for their existence, which encompasses
virtually all examples from the literature. We then discuss the connection
between average and Blackwell optimality, and we describe several algorithms to
compute the optimal average return. Interestingly, our approach leverages the
connections between RMDPs and stochastic games.
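To make the two objectives concrete, here is a small illustrative sketch (not taken from the paper): for a fixed policy on an ergodic two-state Markov reward process with an assumed transition matrix `P` and reward vector `r`, the discounted value solves a linear system, the average reward is the stationary distribution dotted with the rewards, and the classical Tauberian relation `(1 - γ)·v_γ(s) → average reward` as `γ → 1` is exactly the limit underlying Blackwell optimality.

```python
import numpy as np

# Hypothetical 2-state Markov reward process under a fixed policy
# (illustrative numbers, not from the paper).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition matrix
r = np.array([1.0, 0.0])     # reward per state

def discounted_value(P, r, gamma):
    # v = r + gamma * P v  <=>  (I - gamma P) v = r
    n = len(r)
    return np.linalg.solve(np.eye(n) - gamma * P, r)

def average_reward(P, r):
    # Stationary distribution mu solves mu P = mu, sum(mu) = 1;
    # for an ergodic chain the long-run average reward is mu . r.
    evals, evecs = np.linalg.eig(P.T)
    mu = np.real(evecs[:, np.argmax(np.real(evals))])
    mu /= mu.sum()
    return mu @ r

g = average_reward(P, r)            # here mu = (2/3, 1/3), so g = 2/3
for gamma in (0.9, 0.99, 0.999):
    v = discounted_value(P, r, gamma)
    # (1 - gamma) * v(s) approaches g in every state as gamma -> 1
    print(gamma, (1 - gamma) * v, g)
```

The gap `(1 - γ)·v_γ(s) - g` shrinks linearly in `1 - γ`; the robust setting studied in the paper asks when such limits, and policies optimal uniformly near `γ = 1`, survive adversarial choice of `P`.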