Solving Coup as an MDP / POMDP

Semir Shafi, David Lee-Heidenreich

Semantic Scholar (2018)

Abstract
We modeled the card game Coup as a Markov Decision Process (MDP) and attempted to solve it using various methods learned in CS238. Because the state space is large, we focused on online methods. Since Coup is a multi-agent game, we generated optimal policies against players with specific strategies. We first modeled the game as an MDP with complete information, where the entire game state is known, and developed policies against a player taking random actions, using forward search, sparse sampling, and Monte Carlo Tree Search. We then modeled the game as a POMDP with state uncertainty, where our opponents' cards are hidden, and implemented Monte Carlo Tree Search, sparse sampling, and forward search with both incomplete and complete information. Finally, to try to beat our Monte Carlo Tree Search player, we implemented forward search with discrete state filtering to update our belief.
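To illustrate the kind of belief update the abstract's final sentence refers to, the sketch below shows a discrete state filter over an opponent's hidden Coup hand. The state representation (hands as multisets of the five Coup roles, three copies each) and all function names are illustrative assumptions for exposition, not the authors' code.

```python
from collections import Counter
from itertools import combinations_with_replacement

# Hypothetical sketch of discrete state filtering for a Coup POMDP.
# The hidden state is the opponent's hand; the belief is a discrete
# distribution over all hands consistent with the deck.

CARDS = ["Duke", "Assassin", "Captain", "Ambassador", "Contessa"]

def initial_belief(deck_counts, hand_size=2):
    """Uniform belief over every hand the opponent could hold."""
    hands = [
        hand
        for hand in combinations_with_replacement(CARDS, hand_size)
        if all(hand.count(c) <= deck_counts[c] for c in set(hand))
    ]
    p = 1.0 / len(hands)
    return {hand: p for hand in hands}

def update_belief(belief, likelihood):
    """Discrete Bayes filter: reweight each hand by the observation
    likelihood (e.g. 0 for hands inconsistent with a revealed card),
    then renormalize."""
    posterior = {h: p * likelihood(h) for h, p in belief.items()}
    total = sum(posterior.values())
    if total == 0:
        return belief  # observation impossible under this belief; keep prior
    return {h: p / total for h, p in posterior.items()}

# Example: the opponent reveals a Duke after losing a challenge, so
# every hand without a Duke receives likelihood 0.
belief = initial_belief(Counter({c: 3 for c in CARDS}))
belief = update_belief(belief, lambda hand: 1.0 if "Duke" in hand else 0.0)
```

Under this scheme, forward search would plan over the filtered belief rather than the true state, which is why the abstract pairs the filter with forward search when playing against the MCTS opponent.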