I think the most straightforward “edutainment” design would be a “rube or blegg” model: present conflicting evidence, then reveal the Word-of-God objective truth at the end of the game. Different biases can be targeted with different forms of evidence, different models of interpretation (e.g. whether players can assign confidence levels to their guesses), and different scoring methods (e.g. whether the game is iterative, or whether it’s many one-shots with probability of success across games as the goal).
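If players can attach confidence levels to their guesses, a natural way to score them is a proper scoring rule such as the Brier score, which rewards honest calibration. This is just an illustrative sketch of that mechanic, not a spec for any particular game:

```python
def brier_score(confidence: float, is_rube: bool) -> float:
    """Squared error between the player's stated confidence that
    the object is a rube and the revealed truth.
    0.0 is a perfect score, 1.0 is the worst possible."""
    outcome = 1.0 if is_rube else 0.0
    return (confidence - outcome) ** 2

# A player who says "80% rube" gains more when right, and loses
# more when wrong, than one who hedges at 50%.
round_scores = [
    brier_score(0.8, True),   # confident and correct -> 0.04
    brier_score(0.8, False),  # confident and wrong   -> 0.64
    brier_score(0.5, True),   # maximally hedged      -> 0.25
]
total = sum(round_scores)  # lower cumulative score wins
```

Because the rule is proper, a player minimizes their expected penalty only by reporting their true belief, which is exactly the habit such a game would be trying to train.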
A more compelling design that won’t turn off as many people (ew, edutainment? bo-ring) would probably be a multiplayer game in which the players are randomly led to believe incompatible conclusions and then interact. The availability of public information, and the importance of having been right all along or of committing strongly to a position early, could be calibrated to target specific biases and fallacies.
As someone with aspirations to game design, I find this a particularly interesting concept. One notable aspect of video game culture is that most multiplayer games are one-offs from a social perspective: there’s no social penalty for denigrating an ally’s ability, since you will never see them again, and there’s no gameplay penalty for being wrong. In any part of a game where trusting an ally isn’t actually necessary, you can grossly underestimate that ally’s skill indefinitely without ever being proven critically wrong. This makes online gaming perhaps the most fertile incubator of socially corrosive confirmation bias anywhere. If an ally makes a bad early impression, there’s no penalty for writing them off prematurely, and people then seem to apply profound confirmation bias to all evidence for the remainder of the game.
Could a game effectively be designed to target this confirmation bias and give the online gaming community a more constructive and realistic picture? I’ll definitely be mulling this over. Great post.
If I understand your ‘problem’ correctly (estimating potential allies’ capabilities and being right or wrong about that, say when considering teammates, guildmates, or raid members), then it isn’t a game-specific concept at all: it applies to any partner selection without perfect information, such as mating or job interviews.
As long as the pool of potential partners is large enough, and you don’t need all of the ‘good’ ones, then false negatives matter much less than the speed and ease of the selection process and the cost of false positives, where you trust someone who turns out to be poor after all.
There’s no major penalty for being picky and rejecting a potential mate (or hundreds of them), especially for females, as long as you get a decent one in the end. In such situations the optimal evaluation criterion seems to be ‘better to punish a hundred innocents than to let one bad guy/loser past the filter’, the exact opposite of what most justice systems try to achieve.
There’s no major penalty for, say, throwing out a random half of the CVs you receive for a job vacancy if you get too many responses: if a 98%-‘fit’ candidate makes it to the final in-person interviews, it doesn’t matter much that you lost a 99% candidate you never considered at all; the cost of interviewing an extra dozen weak candidates would be greater than the benefit.
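The throw-away-half intuition is easy to check with a toy simulation. The numbers below are purely illustrative (uniformly random candidate quality, an arbitrary pool size), not drawn from any real hiring data:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def best_hire(pool_size: int, keep_fraction: float) -> float:
    """Quality of the best candidate found after randomly discarding
    a fraction of the applicant pool before any screening.
    Quality is modeled as uniform on [0, 1]."""
    pool = [random.random() for _ in range(pool_size)]
    kept = random.sample(pool, int(pool_size * keep_fraction))
    return max(kept)

# With 200 applicants, screening only a random half still almost
# always yields a near-top candidate, at half the screening cost.
full = best_hire(200, 1.0)
half = best_hire(200, 0.5)
```

The point the simulation makes is the commenter’s: with a large pool, the maximum of even a randomly halved sample sits very close to the true maximum, so the false negatives you incur by discarding applicants unseen cost far less than doubling your interview load.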
The same situation also arises in MMOGs, and unsurprisingly people tend to converge on the same reasonable solutions as in real life.