Scalable And Efficient Bayes-Adaptive Reinforcement Learning Based On Monte-Carlo Tree Search

Journal of Artificial Intelligence Research (2013)

Abstract
Bayesian planning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, planning optimally in the face of uncertainty is notoriously taxing, since the search space is enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach avoids expensive applications of Bayes rule within the search tree by sampling models from current beliefs, and furthermore performs this sampling in a lazy manner. This enables it to outperform previous Bayesian model-based reinforcement learning algorithms by a significant margin on several well-known benchmark problems. As we show, our approach can even work in problems with an infinite state space that lie qualitatively out of reach of almost all previous work in Bayesian exploration.
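To make the core idea concrete, below is a minimal sketch of the root-sampling trick the abstract describes: at the start of each simulation, one MDP is drawn from the current posterior, and the whole simulation then treats that sample as the true model, so no belief updates (applications of Bayes rule) are needed inside the search tree. This is an illustrative sketch, not the authors' implementation; the `posterior_sampler` callable and the `mdp.step(s, a)` interface are hypothetical stand-ins for whatever belief representation and model class are used.

```python
import math
import random
from collections import defaultdict

class BayesAdaptivePlanner:
    """Sketch of root sampling for Bayes-adaptive Monte-Carlo tree search.

    One model is sampled from the posterior per simulation; the tree
    statistics (visit counts and Q estimates) are shared across all
    sampled models, which is what makes the planner Bayes-adaptive.
    """

    def __init__(self, actions, posterior_sampler, discount=0.95, c_ucb=1.0):
        self.actions = actions
        self.sample_mdp = posterior_sampler  # () -> model with .step(s, a) -> (s', r)
        self.discount = discount
        self.c_ucb = c_ucb
        self.N = defaultdict(int)            # visit counts per (state, action)
        self.Ns = defaultdict(int)           # visit counts per state
        self.Q = defaultdict(float)          # running action-value estimates

    def plan(self, state, n_simulations=1000, depth=50):
        for _ in range(n_simulations):
            mdp = self.sample_mdp()          # root sampling: one model per simulation
            self._simulate(state, mdp, depth)
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def _simulate(self, state, mdp, depth):
        if depth == 0:
            return 0.0
        a = self._select_ucb(state)
        next_state, reward = mdp.step(state, a)  # sampled model; no Bayes rule here
        ret = reward + self.discount * self._simulate(next_state, mdp, depth - 1)
        # Incremental mean update of the Q estimate along the simulated path.
        self.Ns[state] += 1
        self.N[(state, a)] += 1
        self.Q[(state, a)] += (ret - self.Q[(state, a)]) / self.N[(state, a)]
        return ret

    def _select_ucb(self, state):
        # UCB1 action selection; untried actions are expanded first.
        untried = [a for a in self.actions if self.N[(state, a)] == 0]
        if untried:
            return random.choice(untried)
        return max(
            self.actions,
            key=lambda a: self.Q[(state, a)]
            + self.c_ucb * math.sqrt(math.log(self.Ns[state]) / self.N[(state, a)]),
        )
```

For simplicity this sketch draws a complete model up front; in the paper's lazy variant, the sampled model's parameters are instead generated on demand, only for the state-action pairs a simulation actually visits, which is what keeps the method tractable in very large or infinite state spaces.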
Keywords
Bayesian exploration, Bayesian planning, Monte-Carlo tree search, approximate Bayes-optimal planning, Bayes-adaptive reinforcement learning, model-based reinforcement learning, model uncertainty, infinite state spaces