On the Linear Convergence of Policy Gradient under Hadamard Parameterization

Jiacai Liu, Jinchi Chen, Ke Wei

arXiv (Cornell University), 2023

Abstract
The convergence of deterministic policy gradient under the Hadamard parameterization is studied in the tabular setting, and the global linear convergence of the algorithm is established. To this end, we first show that the error decreases at an $O(\frac{1}{k})$ rate for all the iterations. Based on this result, we further show that the algorithm has a faster local linear convergence rate after $k_0$ iterations, where $k_0$ is a constant that only depends on the MDP problem and the step size. Overall, the algorithm displays a linear convergence rate for all the iterations, with a looser constant than that for the local linear convergence rate.
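As a rough illustration of the setting the abstract describes, the sketch below optimizes a tabular policy under a Hadamard (elementwise-square) parameterization, where the policy is $\pi_a = \theta_a^2$ so that $\pi$ lies on the probability simplex whenever $\theta$ lies on the unit sphere. This is a minimal one-state bandit example with hypothetical reward values, a hypothetical step size, and a simple renormalization step to stay on the sphere; the paper's actual algorithm and its analysis may differ.

```python
import numpy as np

# Hypothetical one-state bandit: a reward for each of 3 actions.
r = np.array([1.0, 0.5, 0.2])

# Hadamard parameterization: pi = theta**2 elementwise, so pi is a valid
# probability vector whenever theta has unit Euclidean norm.
theta = np.ones(3) / np.sqrt(3.0)  # start from the uniform policy

eta = 0.1  # step size (an assumption for this sketch)
for _ in range(200):
    # Expected reward J(theta) = sum_a theta_a^2 * r_a, so grad J = 2 * theta * r.
    grad = 2.0 * theta * r
    theta = theta + eta * grad
    theta /= np.linalg.norm(theta)  # renormalize so pi stays on the simplex

pi = theta ** 2
print(pi.argmax())  # the policy concentrates on the highest-reward action
```

The renormalization here is just one simple way to maintain feasibility; it is not claimed to be the update rule analyzed in the paper, which establishes the $O(\frac{1}{k})$ and local linear rates for its specific deterministic policy gradient scheme.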
Keywords
policy gradient, Hadamard