An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward

AISTATS 2009

Cited by: 43 | Views: 56
Abstract
We derive a new expectation maximization algorithm for policy optimization in linear Gaussian Markov decision processes, where the reward function is parameterized as a flexible mixture of Gaussians. This approach exploits both analytical tractability and numerical optimization. Consequently, on the one hand, it is more flexible and general than closed-form solutions, such as the widely used linear quadratic Gaussian (LQG) controllers. On the other hand, it is more accurate and faster than optimization methods that rely on approximation and simulation. Partial analytical solutions (though costly) eliminate the need for simulation and, hence, avoid approximation error. The experiments show that, for the same computational cost, policy optimization methods that rely on analytical tractability achieve higher value than those that rely on simulation.
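The analytical tractability claimed above rests on a standard Gaussian identity: the expectation of a Gaussian density under a Gaussian state distribution is itself available in closed form, so a mixture-of-Gaussians reward can be integrated exactly without simulation. The following is a minimal one-dimensional sketch of that property (the function names and the 1-D setting are illustrative, not the paper's notation):

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of N(mean, var) evaluated at x."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def expected_mixture_reward(mu, sigma2, weights, means, variances):
    """Closed-form E[r(x)] for x ~ N(mu, sigma2), where the reward is
    r(x) = sum_k weights[k] * N(x; means[k], variances[k]).

    Uses the Gaussian convolution identity
    E[N(x; m, s2)] = N(mu; m, s2 + sigma2),
    so no sampling or approximation is needed."""
    total = 0.0
    for w, m, s2 in zip(weights, means, variances):
        total += w * gaussian_pdf(mu, m, s2 + sigma2)
    return total
```

Because this expectation is exact, an EM-style policy update can reweight each mixture component analytically rather than estimating the integral by Monte Carlo, which is the source of the speed/accuracy advantage the abstract describes.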
Keywords
linear quadratic Gaussian, analytic solution, closed-form solution, expectation maximization algorithm, Markov decision process, approximation error, mixture of Gaussians