Activating the security cameras does not, in and of itself, prevent further murders. It’s a deterrent, not a shield.
If that’s how you want to play it, I’d recommend having a game mechanic for assaulting another player’s character with one of the murder weapons, which forces them into cryostasis or uploading. Cryostasis is, from an out-of-game perspective, “Screw you guys, this sucks, I’m gonna go do something else.” Uploading means you can continue to play, in a robotic telepresence body, but (due to the inadequately-secured wireless signal) can no longer keep secrets, and possibly have other restrictions.
The security system, having been put on alert, cannot be quickly and non-destructively shut down without a set of codes that Mr. Boddy was clever enough not to keep written down on-site.

It is possible, however, to spend a turn’s action functionally disabling the surveillance in a given room, either by taping opaque obstructions over lenses (which can be easily reversed by anyone else in the room) or by destroying the cameras outright. Either is enormously suspicious, there’s a chance you missed one, and you have to be in the room in question to even make an attempt.

Alternatively, from the security system’s control room, it’s possible to recalibrate a given room’s cameras into uselessness: pivot them to face walls, turn up the gain to record only whiteout, etc. This is always impermanent, but it makes subsequently breaking those cameras safer. From the control room, it’s also possible to review recordings (look at the notes other players have secretly exchanged) but not destroy them, since there’s an off-site backup.
It would make sense for at least one person to sincerely remember being the murderer. That’s strong evidence, but far from perfect. If a person who remembers actually is the murderer, their memories of how it happened and where are also useful. Everybody knows what they remember, nobody knows The Truth… until it’s over.
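To put rough numbers on “strong evidence, but far from perfect”: a toy Bayes update, with entirely made-up figures (one murderer among six players, the murderer remembers with probability 0.8, each innocent has a 0.1 chance of a false implanted memory):

```python
# Toy Bayes update for "sincerely remembers being the murderer".
# All numbers are illustrative assumptions, not proposed game rules.
p_murderer = 1 / 6          # prior: one murderer among six players
p_mem_given_guilty = 0.8    # the murderer actually remembers the deed
p_mem_given_innocent = 0.1  # an innocent has a false implanted memory

p_mem = (p_mem_given_guilty * p_murderer
         + p_mem_given_innocent * (1 - p_murderer))
posterior = p_mem_given_guilty * p_murderer / p_mem
print(round(posterior, 3))  # ~0.615: strong evidence, far from proof
```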
It’s the same basic genre as Mafia or Diplomacy. Might as well admit it and learn from what came before.
I think the tricky Bayesian-specific part would be the probability estimates. What about giving everyone chips, like for roulette? Start with a pile and expend them on certain in-game actions. When the game ends, if there’s a spot that turns out to be true but you didn’t put any chips on it, you lose outright; the highest possible score, regardless of how many chips you spent, is to have exactly one chip on each correct answer and none on any others, representing people’s willingness to put up with annoying behavior from someone who turns out to be an oracle.
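One way to formalize that scoring rule, as a minimal Python sketch; the reading that every chip on a wrong spot, and every chip past the first on a correct spot, counts as waste is my assumption, not something stated above:

```python
def score(bids, truth):
    """bids: dict mapping each board spot to chips placed on it.
    truth: set of spots that turn out to be correct."""
    # No chips on a true spot = probability zero on the truth:
    # an outright loss, no score at all.
    if any(bids.get(spot, 0) == 0 for spot in truth):
        return None
    # Every chip on a wrong spot, and every chip past the first on a
    # correct spot, counts as waste (my assumption).
    wasted = sum(n for spot, n in bids.items() if spot not in truth)
    wasted += sum(bids[spot] - 1 for spot in truth)
    return -wasted  # maximum 0: one chip per truth, none elsewhere
```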
You mean, like Wits & Wagers?
About three questions’ worth of W&W for the endgame, yeah. One big difference, though: in W&W, failing to bet on the winner in a given round just means you win nothing and lose whatever part of your stake you bet that round. In Bayesian probability, assigning probability zero means there’s no going back, so it’s important not to do that unless you’re unreasonably sure. Of course, the goal of the game is to be rational, and rationalists should win, so it’s good to have an ultimate victory condition that a blatantly irrational player can occasionally achieve by dumb luck, to keep even the most skilled players craving opportunities to improve further.
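That “no going back” is exactly how the standard logarithmic scoring rule behaves; a minimal sketch:

```python
import math

def log_score(p_given_to_truth):
    """Reward = log of the probability you assigned to the outcome
    that actually happened. Giving the truth p = 0 costs infinitely
    much, and no later round can buy that back."""
    if p_given_to_truth == 0:
        return -math.inf
    return math.log(p_given_to_truth)

print(log_score(0.5))  # -0.693...
print(log_score(0.0))  # -inf
```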
I like the idea a lot. I’m not nearly as crazy about your analysis, but then your analysis is maybe 100× more complicated than the idea itself in terms of Kolmogorov-who’s-his-face complexity, so that’s not too surprising.
I think if we’re going to apply strict Bayesian religious payoffs, we’ll need to give each player more chips to drive the point home. With six chips and three choices, e.g., it’s trivial to learn to bid 3:2:1 or 4:1:1 (aside from the uninformative 2:2:2, the only combinations that don’t leave a zero anywhere), depending on whether you’re “sure” or not that your #1 pick is correct. It’s also suboptimal: if you’re only going to play, say, 3 or 4 games with the same group of people, and each game has 3 rounds, and you are rationally 95% confident in your #1 pick, with 3.5% on your #2 and 1.5% on your #3, then you could bid 5:1:0 and expect to beat all your friends until they got bored with the game. That teaches the wrong lesson: life offers more iterations than one-off Clue.
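A quick back-of-the-envelope check of that 5:1:0 claim, assuming the simplest payoff (you win the chips you placed on the true answer, and a zero on the truth is an outright loss):

```python
beliefs = (0.95, 0.035, 0.015)  # the confidences from above

def expected_chips(bid):
    return sum(b * p for b, p in zip(bid, beliefs))

for bid in [(3, 2, 1), (4, 1, 1), (5, 1, 0)]:
    p_bust = sum(p for b, p in zip(bid, beliefs) if b == 0)
    print(bid, round(expected_chips(bid), 3), p_bust)
# (3, 2, 1) -> 2.935 chips, never busts
# (4, 1, 1) -> 3.85 chips, never busts
# (5, 1, 0) -> 4.785 chips, 1.5% bust risk per round
# Over a 3-round game, 5:1:0 busts only 1 - 0.985**3, about 4.4%
# of the time: rarely enough to look like a winning habit in 3-4
# games with the same group.
```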
With six weapons and six characters and, say, 40 chips, there is still a temptation to play zero chips on some weapons, but the dangers of this strategy are likely to become vividly apparent within a few games. Because you don’t need to leave a tile open in order to win (you can win by outguessing others with your distribution, maybe putting 15 chips on a weapon you are quite sure of and only 5 on the character you are most sure of, because you are well-calibrated and know what you know), the downside of leaving a zero open is hard to miss. Your final score could be the chips you bid on the winning weapon times the chips you bid on the winning character.
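Since log(weapon chips × character chips) is just the weapon term plus the character term, that multiplicative score can be analyzed one category at a time. A sketch in expected-log terms, with hypothetical beliefs over the six weapons (the numbers are mine, purely for illustration):

```python
import math

def expected_log_score(chips, probs):
    """E[log(chips on the true option)] for one category; -inf if any
    option you give positive probability is left with zero chips."""
    total = 0.0
    for c, p in zip(chips, probs):
        if p == 0:
            continue
        if c == 0:
            return -math.inf
        total += p * math.log(c)
    return total

# Hypothetical beliefs over the six weapons, spending 20 of 40 chips:
probs        = [0.50, 0.20, 0.12, 0.08, 0.06, 0.04]
proportional = [10, 4, 2, 2, 1, 1]  # roughly beliefs * 20
lopsided     = [15, 5, 0, 0, 0, 0]  # "quite sure", four zeros open
print(expected_log_score(proportional, probs))  # finite, ~1.57
print(expected_log_score(lopsided, probs))      # -inf: 30% bust risk
```

Ignoring the integer constraint on chips, betting in proportion to your probabilities maximizes the expected log score (the Kelly-style result), which is why an open zero is punished so hard here.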