Deep Reinforcement Learning for Infinite Horizon Mean Field Problems in Continuous Spaces
arXiv (2023)
Abstract
We present the development and analysis of a reinforcement learning (RL)
algorithm designed to solve continuous-space mean field game (MFG) and mean
field control (MFC) problems in a unified manner. The proposed approach pairs
the actor-critic (AC) paradigm with a representation of the mean field
distribution via a parameterized score function, which can be efficiently
updated in an online fashion, and uses Langevin dynamics to obtain samples from
the resulting distribution. The AC agent and the score function are updated
iteratively and converge either to the MFG equilibrium or to the MFC optimum of a
given mean field problem, depending on the choice of learning rates. A
straightforward modification of the algorithm allows us to solve mixed mean
field control games (MFCGs). The performance of our algorithm is evaluated
using linear-quadratic benchmarks in the asymptotic infinite horizon framework.
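The abstract's key device is representing the mean field distribution through a parameterized score function and drawing samples from it with Langevin dynamics. A minimal sketch of that sampling step, using the unadjusted Langevin algorithm with a hand-picked score (here the score of a standard normal, s(x) = -x, chosen purely for illustration; the paper's score is a learned parameterized function):

```python
import numpy as np

def langevin_sample(score, x0, step=1e-2, n_steps=5000, rng=None):
    """Unadjusted Langevin dynamics:
    x <- x + step * score(x) + sqrt(2 * step) * noise,
    whose stationary law has the given score (up to discretization bias)."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Illustrative score: s(x) = -x is the score of N(0, 1),
# so the particles should equilibrate to a standard normal.
score = lambda x: -x
samples = langevin_sample(score, x0=np.zeros(2000), rng=0)
print(samples.mean(), samples.std())  # both near 0 and 1 respectively
```

In the algorithm described above, the score function would instead be a neural parameterization updated online from the agent's trajectory data, with the Langevin chain supplying mean field samples between updates.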