Self-Play Monte-Carlo Tree Search in Computer Poker

National Conference on Artificial Intelligence (2014)

Abstract
Self-play reinforcement learning has proved to be successful in many perfect-information two-player games. However, research carrying over its theoretical guarantees and practical success to games of imperfect information has been lacking. In this paper, we evaluate self-play Monte-Carlo Tree Search (MCTS) in limit Texas Hold'em and Kuhn poker. We introduce a variant of the established UCB algorithm and provide first empirical results demonstrating its ability to find approximate Nash equilibria.
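The abstract does not spell out the UCB variant itself. For reference, the sketch below shows the standard UCB1 selection rule that self-play MCTS typically applies at each decision node, which the paper's variant presumably modifies for the imperfect-information setting. The names (Node, ucb1_select) and the exploration constant are illustrative assumptions, not taken from the paper.

```python
import math

# Minimal sketch of standard UCB1 child selection in MCTS.
# Node, visits, value_sum and the exploration constant c are illustrative;
# the paper's self-play UCB variant is not reproduced here.

class Node:
    def __init__(self):
        self.children = {}      # action -> child Node
        self.visits = 0         # N(s): number of times this node was visited
        self.value_sum = 0.0    # accumulated payoff from simulations through this node

    def value(self):
        # Average payoff Q(s, a) observed so far
        return self.value_sum / self.visits if self.visits else 0.0


def ucb1_select(node, c=math.sqrt(2)):
    """Select the action maximizing Q(s,a) + c * sqrt(ln N(s) / N(s,a)).

    Unvisited children are returned immediately so that every action
    is tried at least once before the exploration bonus takes over.
    """
    best_action, best_score = None, -float("inf")
    for action, child in node.children.items():
        if child.visits == 0:
            return action
        score = child.value() + c * math.sqrt(math.log(node.visits) / child.visits)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

In self-play, both players grow and traverse such trees with the same selection rule; the paper's contribution is an adjusted rule whose empirical play converges toward approximate Nash equilibria in Kuhn poker and limit Texas Hold'em.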
Keywords
imperfect information, Monte Carlo tree search, reinforcement learning