O(T^-1) Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games

arXiv (2024)

Abstract
No-regret learning has a long history of close connections to game theory. Recent works have devised uncoupled no-regret learning dynamics that, when adopted by all the players in normal-form games, converge to various equilibrium solutions at a near-optimal rate of O(T^-1), a significant improvement over the O(T^-1/2) rate of classic no-regret learners. However, analogous convergence results are scarce in Markov games, a more general setting that lays the foundation for multi-agent reinforcement learning. In this work, we close this gap by showing that the optimistic follow-the-regularized-leader (OFTRL) algorithm, together with appropriate value update procedures, can find O(T^-1)-approximate (coarse) correlated equilibria in full-information general-sum Markov games within T iterations. Numerical results are also included to corroborate our theoretical findings.
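The abstract does not spell out the update rule, but a minimal sketch of an OFTRL step in the simpler normal-form setting, with a negative-entropy regularizer and the last observed utility as the optimistic prediction, might look as follows. The payoff matrices, step size `eta`, and helper name `oftrl_step` are illustrative assumptions, not taken from the paper, which further couples OFTRL with value updates for the Markov-game setting.

```python
import numpy as np

def oftrl_step(cum_utility, last_utility, eta):
    """One OFTRL step with a negative-entropy regularizer on the simplex.

    Predicting that the next utility equals the most recent one gives the
    closed-form softmax update: x_{t+1} proportional to
    exp(eta * (sum_{s<=t} u_s + u_t)).
    """
    logits = eta * (cum_utility + last_utility)
    logits -= logits.max()                 # shift for numerical stability
    x = np.exp(logits)
    return x / x.sum()

# Toy self-play on a 2x2 general-sum normal-form game (payoffs are made up).
A = np.array([[1.0, 0.0], [0.0, 1.0]])     # row player's payoff matrix
B = np.array([[0.5, 1.0], [1.0, 0.0]])     # column player's payoff matrix
eta = 0.5
cum_u1, last_u1 = np.zeros(2), np.zeros(2)
cum_u2, last_u2 = np.zeros(2), np.zeros(2)
for t in range(1000):
    x = oftrl_step(cum_u1, last_u1, eta)   # row player's mixed strategy
    y = oftrl_step(cum_u2, last_u2, eta)   # column player's mixed strategy
    last_u1, last_u2 = A @ y, B.T @ x      # expected utility of each action
    cum_u1 += last_u1
    cum_u2 += last_u2
```

In the paper's Markov-game setting, an analogous per-state update would be driven by Q-value estimates maintained by the value update procedure rather than by fixed payoff matrices.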