Policy Iteration Based on a Learned Transition Model.

Vivek Ramavajjala, Charles Elkan

ECML/PKDD (2) (2012)

Abstract
This paper investigates a reinforcement learning method that combines learning a model of the environment with least-squares policy iteration (LSPI). The LSPI algorithm learns a linear approximation of the optimal state-action value function; the idea studied here is to let this value function depend on a learned estimate of the expected next state instead of directly on the current state and action. This approach makes it easier to define useful basis functions, and hence to learn a useful linear approximation of the value function. Experiments show that the new algorithm, called NSPI for next-state policy iteration, performs well on two standard benchmarks, the well-known mountain car and inverted pendulum swing-up tasks. More importantly, the NSPI algorithm performs well, and better than a specialized recent method, on a resource management task known as the day-ahead wind commitment problem. This latter task has action and state spaces that are high-dimensional and continuous.
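To make the contrast concrete, the following is a minimal sketch (not the authors' code) of how LSPI and the proposed NSPI parameterize the state-action value function: LSPI evaluates a linear function of basis features over the state and action, while NSPI evaluates basis features of the expected next state produced by a learned transition model. All function names, basis functions, and the toy linear model below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def phi_state(s):
    """Hypothetical basis functions defined over a state vector only."""
    return np.concatenate(([1.0], s, s ** 2))

def lspi_q(s, a, w, phi_state_action):
    """LSPI-style value: Q(s, a) = w . phi(s, a), basis over state AND action."""
    return w @ phi_state_action(s, a)

def nspi_q(s, a, w, predict_next_state):
    """NSPI-style value: Q(s, a) = w . phi(E[s' | s, a]); the expected next
    state comes from a learned transition model, so the basis only needs to
    cover states, which is easier to design."""
    s_next = predict_next_state(s, a)  # learned model's expected next state
    return w @ phi_state(s_next)

if __name__ == "__main__":
    # Toy 2-D state, scalar action, and a linear model standing in for a
    # learned transition model (illustrative only).
    rng = np.random.default_rng(0)
    A = rng.normal(size=(2, 2))
    B = rng.normal(size=(2, 1))
    predict_next_state = lambda s, a: A @ s + B @ np.atleast_1d(a)
    s, a = np.array([0.3, -0.1]), 0.5
    w = rng.normal(size=phi_state(s).size)
    print("NSPI-style Q(s, a) =", nspi_q(s, a, w, predict_next_state))
```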
Keywords
value function, LSPI algorithm, NSPI algorithm, current state, expected next state, new algorithm, optimal state-action value function, state space, useful basis function, inverted pendulum swing-up task, policy iteration, transition model