Trust Region Policy Optimization
CoRR (2015)
Abstract
We describe an iterative procedure for optimizing policies, with guaranteed
monotonic improvement. By making several approximations to the
theoretically-justified procedure, we develop a practical algorithm, called
Trust Region Policy Optimization (TRPO). This algorithm is similar to natural
policy gradient methods and is effective for optimizing large nonlinear
policies such as neural networks. Our experiments demonstrate its robust
performance on a wide variety of tasks: learning simulated robotic swimming,
hopping, and walking gaits; and playing Atari games using images of the screen
as input. Despite its approximations that deviate from the theory, TRPO tends
to give monotonic improvement, with little tuning of hyperparameters.
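The core idea the abstract describes is a KL-constrained policy update: maximize a surrogate objective while keeping the new policy within a fixed KL divergence of the old one. Below is a minimal sketch of that update on a toy single-state softmax policy with hand-picked advantage values; the advantages, the trust-region radius, and the small damping term are illustrative assumptions, and the closed-form Fisher solve stands in for the conjugate-gradient procedure and neural-network policies the paper actually uses.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def kl(p, q):
    # KL(p || q) for discrete distributions
    return float(np.sum(p * np.log(p / q)))

# Toy setup: one state, 4 actions, fixed advantage estimates per action.
# A and DELTA are illustrative choices, not values from the paper.
rng = np.random.default_rng(0)
theta = rng.normal(size=4)            # policy parameters (softmax logits)
A = np.array([1.0, -0.5, 0.2, -1.0])  # advantage of each action (assumed given)
DELTA = 0.01                          # max KL per update (trust-region radius)

for it in range(20):
    pi_old = softmax(theta)

    # Gradient of the surrogate L(theta) = sum_a pi_theta(a) * A(a),
    # evaluated at theta_old; closed form for a softmax policy.
    g = pi_old * (A - pi_old @ A)

    # Fisher information of the softmax: F = diag(pi) - pi pi^T.
    # Small damping added because F is singular along the logits' shift direction.
    F = np.diag(pi_old) - np.outer(pi_old, pi_old) + 1e-6 * np.eye(4)

    # Natural-gradient direction, scaled to the largest step satisfying
    # the quadratic KL approximation (1/2) d^T F d <= DELTA.
    d = np.linalg.solve(F, g)
    step = np.sqrt(2.0 * DELTA / (d @ (F @ d))) * d

    # Backtracking line search: accept only if the exact KL constraint
    # holds and the surrogate improved (TRPO's acceptance test).
    L_old = pi_old @ A
    for frac in (1.0, 0.5, 0.25, 0.125):
        theta_new = theta + frac * step
        pi_new = softmax(theta_new)
        if kl(pi_old, pi_new) <= DELTA and pi_new @ A > L_old:
            theta = theta_new
            break

print("final policy:", softmax(theta))  # mass shifts toward the best action
```

The per-step KL cap is what distinguishes this from a plain natural-gradient update: the step is rescaled to the trust-region boundary and then checked against the exact KL, which is the mechanism behind the near-monotonic improvement the abstract claims.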