Recursive Monte Carlo Search for Bridge Card Play

2020 IEEE Conference on Games (CoG)

Abstract
Computer Bridge remains a challenging problem for Artificial Intelligence. For the last twenty years, state-of-the-art playing programs have used a depth-one Monte Carlo (MC) search associated with an open-card solver called the Double Dummy Solver (DDS). When computing resources are increased, the MC approach reaches a plateau and its playing level cannot be improved further. In this work, we study Recursive MC (RMC) for Bridge card play and show that, given more computing resources, this approach outperforms MC. Rather than using DDS or any domain-dependent simulator, a level N + 1 RMC player uses a level N RMC player as its simulator, level-zero RMC being MC. This recursion can be iterated several times, at the cost of increased computing time each time a recursion level is added. This work focuses on no-trump card play in duplicate format, with either 13 cards or 5 cards per player. With 13 cards per player, level-one RMC is superior to MC by an average margin of 0.5 tricks per card distribution, which is statistically significant. This is the first time RMC has been applied successfully to computer Bridge card play.
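To make the recursion mechanism concrete, below is a minimal Python sketch, not taken from the paper, on a toy two-player trick-taking game: each legal card is averaged over sampled determinizations of the hidden cards, scored by random playouts at level zero (standing in for the double dummy solver the paper uses as the base simulator) and by level N-1 RMC players at level N. The toy game, the names (State, sample_determinization, rmc_move), and the parameter values are illustrative assumptions only.

```python
"""Illustrative sketch of Recursive Monte Carlo (RMC) search.

Not the paper's code: the game is a toy trick-taking game (eight ranked
cards, three per hand, two face down, highest rank wins each trick), and
random playouts stand in for the Double Dummy Solver used in the paper.
"""
import copy
import random

DECK = list(range(1, 9))   # ranks 1..8; 3 per player, 2 dealt face down
HAND_SIZE = 3


class State:
    """Open-card state used by the simulators."""

    def __init__(self, hands):
        self.hands = [list(hands[0]), list(hands[1])]
        self.to_play = 0          # player about to act
        self.led = None           # card led to the current trick, if any
        self.tricks = [0, 0]      # tricks won so far
        self.played = set()       # cards already seen on the table

    def legal_moves(self):
        return list(self.hands[self.to_play])

    def play(self, card):
        s = copy.deepcopy(self)
        s.hands[s.to_play].remove(card)
        s.played.add(card)
        if s.led is None:                 # leading to a new trick
            s.led = card
            s.to_play = 1 - s.to_play
        else:                             # second card: resolve the trick
            leader = 1 - s.to_play
            winner = leader if s.led > card else s.to_play
            s.tricks[winner] += 1
            s.led = None
            s.to_play = winner            # trick winner leads next
        return s

    def is_terminal(self):
        return not self.hands[0] and not self.hands[1]


def sample_determinization(state, rng):
    """Redeal the cards the player to move cannot see (opponent's hand
    plus the face-down cards), keeping everything observable fixed."""
    me, opp = state.to_play, 1 - state.to_play
    unseen = [c for c in DECK
              if c not in state.hands[me] and c not in state.played]
    rng.shuffle(unseen)
    det = copy.deepcopy(state)
    det.hands[opp] = unseen[:len(state.hands[opp])]
    return det


def playout(state, rng):
    """Base simulator: finish the deal with random moves.
    (The paper calls the double dummy solver here instead.)"""
    while not state.is_terminal():
        state = state.play(rng.choice(state.legal_moves()))
    return state


def rmc_move(state, level, n_samples, rng):
    """Choose a card with level-`level` RMC: average each candidate over
    sampled determinizations, scored by random playouts at level 0 and
    by level N-1 RMC players at level N."""
    me = state.to_play
    moves = state.legal_moves()
    if len(moves) == 1:
        return moves[0]
    totals = {card: 0.0 for card in moves}
    for _ in range(n_samples):
        det = sample_determinization(state, rng)
        for card in moves:
            s = det.play(card)
            if level == 0:
                s = playout(s, rng)
            else:
                while not s.is_terminal():
                    s = s.play(rmc_move(s, level - 1, n_samples, rng))
            totals[card] += s.tricks[me]
    return max(moves, key=totals.get)


if __name__ == "__main__":
    rng = random.Random(0)
    deck = DECK[:]
    rng.shuffle(deck)
    deal = State((sorted(deck[:HAND_SIZE]),
                  sorted(deck[HAND_SIZE:2 * HAND_SIZE])))
    print("Player 0 holds", deal.hands[0])
    print("Plain MC (level 0) leads:", rmc_move(deal, 0, 16, rng))
    print("Level-1 RMC leads:", rmc_move(deal, 1, 8, rng))
```

Each added level multiplies the search cost by roughly the number of samples times the number of legal moves at every decision point, which illustrates why, as the abstract notes, every extra recursion level comes at the price of increased computing time.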
Keywords
Monte Carlo search, imperfect information games, game of Bridge, card play