Actually, the possibility of somebody lying should probably be a pretty late-game thing, as it makes your belief network a lot more complicated: you suddenly have to juggle hypotheses like “Alice lied” and “Bob exaggerates things”. I’m also not sure whether this thing should display numerical probabilities at all.
Perhaps the protagonist should start out being trained as an investigator on an island where the inhabitants have strong taboos against lying. Early cases are scenarios crafted for your benefit as a trainee. Eventually, you graduate to investigating real cases, and later on, you leave the island.
ETA: A possible alternative to using numerical probabilities: the player’s findings are written in colored boxes, and the color of the box represents how likely the player thinks that individual belief is (e.g. blue = practical certainty, green = very likely, yellow = somewhat likely, orange = unlikely, red = practically falsified). Early on, players would be taught to make beliefs that are consequences of other beliefs at most as likely as the beliefs they follow from, and to assign lower likelihood to the overlap of multiple conditions. For instance, if they have a box that follows as a consequence of three yellow boxes, they’ll want to color that box orange.
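To make that coloring rule concrete, here’s a rough sketch of how a box’s color could be bounded by the boxes it follows from. The color-to-probability bands and all the numbers are invented for illustration, not an actual design:

```python
# Sketch of the colored-box rule: each color stands in for a probability
# band, and a belief derived from several premises can be at most as
# likely as their conjunction. Bands are made up for this example.

COLOR_PROB = {
    "blue": 0.99,    # practical certainty
    "green": 0.85,   # very likely
    "yellow": 0.60,  # somewhat likely
    "orange": 0.30,  # unlikely
    "red": 0.01,     # practically falsified
}

def color_for(p: float) -> str:
    """Map a probability back to the nearest color band."""
    return min(COLOR_PROB, key=lambda c: abs(COLOR_PROB[c] - p))

def conjunction_color(premises: list[str]) -> str:
    """Color bound for a belief that follows from all the premises.

    Treating the premises as independent, the conjunction's probability
    is the product of theirs, so it can never exceed the weakest premise.
    """
    p = 1.0
    for color in premises:
        p *= COLOR_PROB[color]
    return color_for(p)

# A conclusion resting on three "somewhat likely" premises:
print(conjunction_color(["yellow", "yellow", "yellow"]))  # -> orange
```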
I’d be happy to contribute writing to the project, if you’re interested in having me.
Further suggestion: Players should learn about the distinction between accuracy and calibration. There should occasionally be scenarios where the real solution is not something the information available to you singles out as probable. Players should learn that banking on an unlikely solution is never a good bet, but highly probable solutions are still only probable rather than certain.
Players’ performance would be tracked, not just in terms of their ability to get the right answers, but also in terms of their ability to be right about how often they’re right.
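As a rough illustration of what that tracking could look like, here’s a hypothetical calibration report that groups the player’s verdicts by stated confidence and compares each group’s claimed confidence with its actual hit rate. The data below is simulated, and the whole scheme is just one way it might be done:

```python
# Hypothetical calibration tracker: group verdicts by the confidence the
# player stated, then compare stated confidence with the actual hit rate.
import random
from collections import defaultdict

def calibration_report(records):
    """records: iterable of (stated_confidence, was_correct) pairs."""
    bins = defaultdict(list)
    for confidence, correct in records:
        bins[confidence].append(correct)
    for confidence in sorted(bins):
        outcomes = bins[confidence]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"claimed {confidence:.0%}, right {hit_rate:.0%} "
              f"({len(outcomes)} cases)")

# A well-calibrated player claiming 95% should be wrong about 1 in 20:
random.seed(0)
simulated = [(0.95, random.random() < 0.95) for _ in range(100)]
calibration_report(simulated)
```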
I disagree that there should be situations where the less likely outcome is correct just because it is less likely (as a pre-programmed result). The likelihood of an event occurring in the game should be a result of your acquired evidence, and 100% certainty should exist only when there is enough concrete evidence supporting the outcome. Within the game, it should be possible for the true outcome to receive a high probability. Your idea is essential, however, in situations where the probabilities of the events are very close: for example, in a situation with 5 outcomes whose probabilities all fall between 15% and 30%, the answer wouldn’t and shouldn’t be obvious.
100% isn’t a probability, and while it’s often feasible to approach it in practice, it’s also often not, because the evidence necessary to reach that degree of confidence simply isn’t available.
If you reach 95% confidence, you should still be wrong 5% of the time.
If the player learns that they can collect all the available evidence and be right 100% of the time in the game, and then finds that they simply can’t do that in real life, they may be disillusioned with the applicability of the general techniques of the game.
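As a toy illustration of why confidence can approach but never reach 100%: each piece of evidence multiplies the odds by its likelihood ratio, and no finite stack of evidence drives the posterior all the way to 1. The 50% prior and the 4:1 likelihood ratios below are invented numbers:

```python
# Toy Bayesian updating: each piece of evidence multiplies the odds by
# its likelihood ratio. Confidence climbs toward 100% but never arrives.

def posterior(prior: float, likelihood_ratios: list[float]) -> float:
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

for n in range(1, 6):
    p = posterior(0.5, [4.0] * n)
    print(f"after {n} pieces of 4:1 evidence: {p:.3f}")
# -> 0.800, 0.941, 0.985, 0.996, 0.999: approaching 1, never reaching it.
```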
Players should learn that banking on an unlikely solution is never a good bet, but highly probable solutions are still only probable rather than certain.
Banking on an unlikely solution is a good bet if and only if you get odds more favorable than the solution is unlikely.
It’s a question both of payoff and of likeliness.
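Put as a toy calculation (with invented numbers), it’s just an expected-value check: pursuing an unlikely solution pays iff probability times payoff beats the cost:

```python
# Expected-value check for banking on an unlikely solution. A lead is a
# good bet iff its expected return beats its cost. Numbers are made up.

def worth_pursuing(p_success: float, payoff: float, cost: float) -> bool:
    return p_success * payoff > cost

# A 10%-likely solution is a bad bet at even stakes...
print(worth_pursuing(0.10, payoff=1.0, cost=1.0))   # False
# ...but a good one when the payoff is 20x the cost (odds beat 10:1 against).
print(worth_pursuing(0.10, payoff=20.0, cost=1.0))  # True
```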
You could even run that as a game mechanic: you have limited investigative time, certain leads have more promising-looking rewards or are more likely to turn up something that helps you out, and after investigating a lead you update your beliefs and try something else.
A good skill for this is seeing which leads you can cheaply eliminate. It would also bring into focus the costs of having too many low-value leads.
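One rough way that mechanic might score leads (names and numbers invented for the example): rank them by expected value per turn of investigative time, which automatically surfaces the cheap eliminations:

```python
# Hypothetical lead prioritization for the limited-time mechanic: rank
# leads by expected value per turn spent. All leads here are made up.
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    p_useful: float   # current belief that the lead pans out
    value: float      # how much a payoff would advance the case
    turns: float      # in-game time to investigate

    def priority(self) -> float:
        return self.p_useful * self.value / self.turns

leads = [
    Lead("interview the lighthouse keeper", 0.6, 5.0, 2.0),
    Lead("search the harbor records", 0.2, 10.0, 4.0),
    Lead("check the tide tables", 0.3, 2.0, 0.5),  # cheap to eliminate
]

for lead in sorted(leads, key=Lead.priority, reverse=True):
    print(f"{lead.priority():.2f}  {lead.name}")
```

Note how the quick tide-table check outranks the slow harbor search despite its smaller payoff: cheap eliminations buy a lot of updating per turn.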
I’m concerned that separating reward from effectiveness in getting the right answers would make the game too complicated, and dilute the message.
Err, I was trying to go for a “you are rewarded for getting the right answer quickly” sort of deal.
I think it might be counterproductive to time players, because time pressure would likely encourage them to rely on System 1 reasoning and to develop quick but sloppy heuristics for proceeding in the game.
Possibly the game could introduce some Time Attack elements later on, once players have mastered everything else, but speaking as someone who very rarely enjoys time constraints in games, I’d prefer that it be optional.
I thought that by “quickly”, what was actually meant was “with few pieces of evidence”: the fewer, the better. Still, you can always get more evidence than necessary.
Yeah, “quickly” got overloaded to mean both “how long the player takes to make decisions” and “how many units of in-game time have elapsed”. Perhaps “fewer turns used” is a better way to phrase it.
Yes, I was thinking of something similar to the island: that the early levels would take place somewhere where, for whatever reason, you could trust everyone to be truthful. And I was also considering something similar to the boxes!
I’d be happy to have you on the team. :) Please, do join the mailing list that I just set up.