Bayesianism and use of Evidence in Social Deduction Games
You look around the table at four friends—people who share your hatred of the evil empire, or so you thought. At this table, where the resistance meets to plan its missions, fully two of the five operatives are spies, infiltrating the rebels to sabotage their missions. You’ve seen your loyalty card, so you know you’re resistance… but how do you figure out which of your so-called allies are the spies?
The Resistance, like Werewolf, Mafia, Battlestar Galactica, and other social deduction games, tasks the majority of players with rooting out the spies in their midst—while the spies win by staying hidden. Among my friends, accusations of spyhood tend to be absolute: “Did you see how long he hesitated? He must be a spy!” Whether the suspicion is based on social cues or in-game actions, players rapidly become certain of the beliefs they voice at the table. They seem to sort their observations into two neat boxes, according to whether the data can decisively reveal someone’s identity: if evidence seems convincing, it becomes concrete proof, immune to discussion; if it doesn’t, it’s disregarded.
This treatment of evidence can lead to overconfidence: once, when the spies had framed me thoroughly, a fellow resistance member refused to even imagine how I could be innocent. And why should he listen to me? He had evidence that I was a spy. On the other hand, it can just as easily lead to under-confidence: when new players see that there is no conclusive proof one way or the other, they often disregard hints and suggestive evidence (someone’s tone of voice, or their eagerness to go on a mission) and throw up their hands at the supposed randomness of the game.
Using Bayesianism as an alternative to this dichotomy lets me treat evidence with the appropriate scrutiny, rather than letting narrative ideas guide my play. A two-person mission succeeds; the next mission adds a player to that team, and it fails. According to story logic, the first two players are trustworthy, so the third must have sabotaged the new mission. More experienced players treat the first mission as having no informational value: spies may lie low, so any of the three players could be the saboteur, and it’s a 1⁄3 shot. According to Bayesianism, P(player 3 is a spy) is influenced by all available evidence, given proper weighting. How likely is it for a spy to lie low on the first mission? Who chose player 3 to join the mission? What is player 3's strategy as a spy? I find that this approach, of investigating all available evidence and updating my suspicions accordingly, gives my accusations better precision, and I hope it leads my teammates to start valuing evidence in the graded way that these games, and investigation in life generally, require for success.
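To make the update above concrete, here is a minimal sketch in Python. The scenario is the one from the paragraph (five players, two spies, a clean two-person mission followed by a failed three-person mission), but the likelihood number—how often a spy lies low—is entirely made up for illustration; it’s exactly the quantity the paragraph says you should be estimating from your opponents’ play.

```python
# A sketch of the update described above, with made-up numbers.
# 2 of 5 players are spies. Mission 1 (players 1 and 2) succeeded;
# mission 2 (players 1, 2, 3) failed with exactly one sabotage card.
from itertools import combinations

p_lay_low = 0.3  # ASSUMPTION: P(a spy plays a success card to stay hidden)

players = [1, 2, 3, 4, 5]
worlds = list(combinations(players, 2))  # every possible pair of spies

def likelihood(spies):
    # Mission 1 succeeded, so every spy on it must have laid low.
    spies_on_m1 = sum(1 for p in (1, 2) if p in spies)
    l1 = p_lay_low ** spies_on_m1
    # Mission 2 showed exactly one sabotage card among its spies.
    n = sum(1 for p in (1, 2, 3) if p in spies)
    p_sab = 1 - p_lay_low
    l2 = n * p_sab * p_lay_low ** (n - 1) if n >= 1 else 0.0
    return l1 * l2

# Uniform prior over spy-pairs; posterior weight is just the likelihood.
weights = {w: likelihood(w) for w in worlds}
total = sum(weights.values())
p3_spy = sum(wt for w, wt in weights.items() if 3 in w) / total
print(f"P(player 3 is a spy | both missions) = {p3_spy:.2f}")  # ≈ 0.65
```

With these particular numbers the answer lands around 0.65—notably higher than the “experienced” 1⁄3 estimate, because the successful first mission really did shift some suspicion off players 1 and 2 and onto player 3, just not all of it.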
I post this not only because I love playing Resistance (obviously!), but also because I think this game could be a fun and useful exercise in Bayesian reasoning, for the same reasons that Paranoid Debating may be: the group’s appraisal of the evidence needs to be accurate for the resistance to win, while it must be inaccurate for the spies to win. This encourages proper Bayesian technique among the resistance, and clever, bias-abusing rhetoric from the spies to twist the game in their favor.
If anyone would like to use this game at a LessWrong meetup, or as an activity run by the Center for Modern Rationality, all you need are the rules (here and here), a deck of playing cards, and the power of Bayes!
(Special thanks to Julia Galef, for thinking the game sounded like a fun idea for teaching Bayesianism)
We play The Resistance often in Madison. I love it. It’s a great way to practice explicit Bayesian reasoning: “How likely is it that he would do what he just did if he was a spy? If he was a resistance member?” I recommend any group here trying it at least once or twice; it can be fantastic, but depends strongly on people’s attitude towards the game.
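The paired questions in this comment are Bayes’ rule in odds form: posterior odds = prior odds × likelihood ratio. Here is a small sketch with invented numbers—the two conditional probabilities are judgment calls you’d make at the table, not values from the game’s rules.

```python
# Hedged sketch of the comment's two questions, in odds form.
def update_odds(prior_odds, p_action_if_spy, p_action_if_resistance):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_action_if_spy / p_action_if_resistance)

# Made-up example: prior odds 2:3 (2 spies among 5 players), and an
# action you judge three times as likely from a spy as from resistance.
odds = update_odds(2 / 3, 0.6, 0.2)
prob = odds / (1 + odds)  # convert odds back to a probability
print(f"posterior P(spy) = {prob:.2f}")  # odds 2:1 -> 0.67
```

The odds form is convenient mid-game because successive observations just multiply: you never need to renormalize until you want an actual probability.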
Timer variation.
If your games run long, start a three-to-five-minute timer at the beginning of each round. Once the timer runs out, discussion must stop: the leader must name a team, and players must vote on it, without any further “help” from other players.
Keeping the game short obviously saves time: our group can play one untimed game in two hours, or three timed games in the same span. The long game is more intense, but the short games fit better into any given get-together.
More importantly, 40-minute timed games are short enough that resistance members can learn something about the validity of their thought processes. In a longer game, spotting your mistakes after learning people’s true identities is much harder.
Spies, on the other hand, can learn lots of useful technique during a long game, because they know the state of the game, and get strong feedback when suspected or directing suspicion. As such, when we played for a long time without timers, we became much better as spies than as resistance members, and the spies won almost every game. This trend seems to be changing now that we play timed games, but I’ll defer judgment until we’ve played this way at least a dozen more times.
Other variations.
Randomize the order of the rounds, so that they aren’t always shortest first. (This should usually help the resistance, though not always.)
Flip the played cards from each mission one at a time, and stop when you’ve flipped them all, or you’ve flipped enough red cards to sabotage. (This can help the spies.)
Don’t try these until your group has a pretty good idea what’s going on, and you have some reason to suspect that they’d change the stable strategy. I’m not going to describe exactly what these changes are for; that would spoil some of the fun. :)
I think the main reason for this is that without a timer, the spies have every opportunity for motivated continuation, and thus the group never stops on a good team unless every spy has been identified.
The best thing Resistance has taught me is just how useless my certainty is.
Although it is better to use Bayesian probabilities to keep track of who you think is a spy, in a normal game it is advantageous to appear sure that you know who the spy is. By appearing sure, you are more likely to convince other people, which means you get to stay in the game longer and you’re more likely to catch the spy, assuming you’re part of the resistance.
Seeming sure of who the spies are is a strong strategy—but it’s equally strong whether you’re resistance or a spy. Accurate Bayesian reasoning is only a strong strategy for the resistance, since spies don’t want the truth to come out. Spies want to lie either way, but it’s much easier to lie in a finger-pointing contest than in a discussion of evidence and probability updating.
Using explicit Bayesian reasoning is less likely to lead your teammates into bad judgments of the kind I touched on (stemming from over/under-confidence), and it gives your teammates evidence that you are resistance.
Not when you’re playing with other LW readers: in our group, acting certain without communicable evidence is a reasonably clear “tell”.
Yeah, I wasn’t talking about games with LWers, I was talking about games with the average person. That’s what I meant by “normal game,” but it seems that I should have been more explicit.
Edit: I also hadn’t played Resistance before, and assumed it was similar to Mafia. But this game is much more complicated, and seeming certain isn’t as useful here as it is in games like Mafia.
I’ve been a long-time Resistance/Werewolf player and had never thought about playing the game like this; I suppose I hadn’t considered the academic possibilities within social deduction games.
This is a fantastic piece; I’ll definitely start practising my Bayesian technique in future games. I’ve just added your article to my blog on educational board games.
I love Resistance!