Relaxed Policy Iteration Algorithm for Nonlinear Zero-Sum Games With Application to H-Infinity Control

IEEE TRANSACTIONS ON AUTOMATIC CONTROL (2024)

Abstract
Although policy evaluation error profoundly affects the direction of policy optimization and the convergence properties, it is usually ignored in policy iteration methods. This work incorporates practical inexact policy evaluation into a simultaneous policy update paradigm to reach the Nash equilibrium of nonlinear zero-sum games. In the proposed algorithm, the requirement of exact policy evaluation is relaxed to a bounded evaluation error characterized by the Hamiltonian, without sacrificing convergence guarantees. By exploiting the Fréchet differential, the practical value-function iteration with estimation error is recast as Newton's method with variable step sizes that are inversely proportional to the evaluation errors. Accordingly, we construct a monotone scalar sequence, driven by the same Newton iteration as the value sequence, that bounds the value-function error and enjoys an exponential convergence rate. Numerical results demonstrate convergence on affine systems and the potential to cope with general nonlinear plants.
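
To make the iterative scheme concrete, the following is a minimal sketch for the linear-quadratic special case, where the Hamilton-Jacobi-Isaacs equation reduces to a game algebraic Riccati equation. All system matrices, the attenuation level, the inner fixed-point solver, and the tolerance schedule are illustrative assumptions, not taken from the paper; the point is only to show policy evaluation being stopped early at a Hamiltonian-residual bound while both players update simultaneously.

```python
import numpy as np

# Illustrative LQ zero-sum game data (assumed, not from the paper).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])   # control input channel
D = np.array([[0.0], [0.5]])   # disturbance channel
Q = np.eye(2)
R = np.eye(1)
gamma = 2.0                    # H-infinity attenuation level (assumed)

def hamiltonian(P, K, L):
    """Residual of the game Lyapunov equation for fixed gains (K, L):
    a zero residual means policy evaluation is exact."""
    Acl = A - B @ K + D @ L
    return Acl.T @ P + P @ Acl + Q + K.T @ R @ K - gamma**2 * L.T @ L

def relaxed_evaluation(K, L, P, tol, step=0.1, max_inner=200):
    """Inexact policy evaluation: a crude damped fixed-point sweep that stops
    as soon as the Hamiltonian norm drops below the prescribed bound `tol`,
    rather than solving the Lyapunov equation exactly."""
    for _ in range(max_inner):
        H = hamiltonian(P, K, L)
        if np.linalg.norm(H) <= tol:
            break
        P = P + step * H           # any inexact inner solver would do here
    return P

# Simultaneous policy update with a shrinking evaluation-error bound.
K = np.zeros((1, 2))               # initial gains; closed loop assumed stabilizing
L = np.zeros((1, 2))
P = np.eye(2)
for k in range(25):
    tol_k = 0.5 ** k               # evaluation error allowed at iteration k
    P = relaxed_evaluation(K, L, P, tol_k)
    K = np.linalg.solve(R, B.T @ P)    # minimizing (control) player update
    L = (D.T @ P) / gamma**2           # maximizing (disturbance) player update

print("approximate value matrix P:\n", P)
print("final Hamiltonian residual:", np.linalg.norm(hamiltonian(P, K, L)))
```

Under the abstract's interpretation, tolerating a nonzero Hamiltonian residual corresponds to taking a shortened Newton step on the value function, so shrinking tol_k trades per-iteration evaluation cost against the effective step length.
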
Keywords
Game theory, Games, Convergence, Mathematical models, Approximation algorithms, Newton method, Partial differential equations, Hamilton-Jacobi-Isaacs (HJI) equation, Newton's method, policy iteration, zero-sum game