I am extremely interested in this, and all similar efforts in this space. I agree our community should be doing much more along these lines.
Regarding your specific ideas:
Cognitive Bias Detection
Something about training people to categorize errors—instead of just making good decisions—rubs me the wrong way. Also, there’s a lot of pre-existing work (I found out about this earlier today).
Calibration Training
The Credence Calibration Game exists. So does my variation on the same idea (see also the associated lesson plan). So do play-money and real-money prediction markets. That said, I do think there’s a valuable and unfilled niche for something that doesn’t require a download, has a nice user interface, has a four-digit number of questions, and lets you check your answers immediately (...though I don’t know how many people other than me would consider it valuable).
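(For concreteness: the core mechanic of such a tool reduces to bucketing answers by stated confidence and comparing each bucket’s hit rate to its label. A minimal Python sketch; the names and numbers are made up for illustration, not taken from any of the tools above:)

```python
from collections import defaultdict

def calibration_report(answers):
    """answers: a list of (stated_confidence, was_correct) pairs,
    e.g. (0.7, True) means 'I was 70% sure, and I was right'."""
    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[confidence].append(correct)
    for confidence in sorted(buckets):
        results = buckets[confidence]
        hit_rate = sum(results) / len(results)
        # Well-calibrated answers have hit_rate close to stated confidence.
        print(f"said {confidence:.0%}, got {hit_rate:.0%} right ({len(results)} questions)")

# Ten answers given at 70% confidence, six of which turned out correct:
calibration_report([(0.7, True)] * 6 + [(0.7, False)] * 4)
```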
Bite-Sized, Practical Challenges
I am very much in favor of this, to the point where I’m already (tentatively) planning to (eventually) build some games with a similar motivation. Relatedly, the “ask users to predict an outcome based on limited data” example sounds like a description of that genre I invented (though “Bite-Sized” suggests you’re thinking in terms of something much more polished/generally-accessible).
(Side note: A subtle benefit of the “Practical Challenges” approach is that it can correct for biases you weren’t aiming for. A large part of my motivation for making D&D.Sci was “forcing them to confront the common pitfalls of overconfidence or representativeness heuristics”; what I actually found was that a LessWronger working in a Data Science context will more often be insufficiently confident, and place too little weight on surface appearances. My endeavor ‘failed’ gracefully: people got a chance to notice those errors instead (plus various other problems I didn’t even consider).)
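(To make “ask users to predict an outcome based on limited data” concrete: the skeleton of such a challenge is just a hidden data-generating process, a visible sample, and a scoring rule. A rough Python sketch follows; the theming and numbers are invented for illustration, and this is not how D&D.Sci is actually implemented:)

```python
import random

def make_challenge(n_rows=20, seed=None):
    """Hidden rule (the scenario-writer knows it; players don't):
    the potion works 80% of the time with mandrake root, 30% without."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n_rows):
        used_mandrake = rng.random() < 0.5
        potion_worked = rng.random() < (0.8 if used_mandrake else 0.3)
        rows.append({"mandrake": used_mandrake, "worked": potion_worked})
    return rows

def score(stated_probability, true_probability=0.8):
    """Brier-style penalty for the player's prediction; lower is better."""
    return (stated_probability - true_probability) ** 2

# The player sees only the rows, estimates P(works | mandrake), and is scored:
print(make_challenge(seed=0)[:3])
print(score(0.75))
```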
-
I look forward to seeing what comes of this. If you want anything playtested, please let me know.
I appreciate the reply!
Something about training people to categorize errors—instead of just making good decisions—rubs me the wrong way
Are you able to pinpoint exactly what gives you this feeling? The goal of this problem type would be to train the ability to recognize bias to the point where it becomes second nature, with the hope that this same developed skill would also trigger in your own thought processes. I believe it’s generally easier to evaluate the truthfulness of a statement than to come up with one initially, so this training would help make the “biased thought detector” more accurate.
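(Concretely, I’m imagining items like the following; a minimal sketch with made-up statements and answer labels:)

```python
# A minimal spot-the-bias exercise; statements and labels are invented.
ITEMS = [
    {"statement": "The last three flips were heads, so tails is due.",
     "answer": "gambler's fallacy"},
    {"statement": "I knew all along that the launch would slip.",
     "answer": "hindsight bias"},
]

def check(item, guess):
    """Immediate feedback: did the user name the intended error?"""
    return guess.strip().lower() == item["answer"]

print(check(ITEMS[0], "Gambler's Fallacy"))  # True
```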
Relatedly, the “ask users to predict an outcome based on limited data” example sounds like a description of that genre I invented (though “Bite-Sized” suggests you’re thinking in terms of something much more polished/generally-accessible).
That’s really cool! I definitely see the value in multi-step case study problems, as they would require more complex reasoning than smaller bite-sized problems might. Themed problems could also make the process much more engaging, as I think this kind of training can get a bit dull with overly generic examples. Combining the depth of case studies with the accessibility of simpler exercises might strike a nice balance.
I look forward to seeing what comes of this. If you want anything playtested, please let me know.
I’ll definitely take you up on this! I’m working on the prototype and should have something simple in the next few weeks. I’m considering starting a sequence documenting the progress, to get more visibility, interest, and immediate feedback.
Are you able to pinpoint exactly what gives you this feeling?
Less a single sharp pinpoint, more a death of six cuts:
The emphasis on learning the names of biases is kinda guessing-the-teacher’s-password-y.
You’d need to put forth an unusual effort to make sure you’re communicating the subset of psychological research which actually replicates reliably.
Any given bias might not be present in the student or their social/business circle.
The suggested approach implies that the set of joints psychologists currently carve at is the ‘best’ one; what if I happen to see Bias A and Bias B as manifestations of Bias C?
I worry some students would round this off to “here’s how to pathologize people who disagree with me!” training.
Like I said, this is the kind of fruit that’s low-hanging enough that it’s mostly already picked.
All that said, I still think this is potentially worthwhile and would still playtest it if you wanted. But I’m much more excited about literally every other idea you mentioned.
The goal of this problem type would be to train the ability to recognize bias to the point where it becomes second nature, with the hope that this same developed skill would also trigger in your own thought processes.
Part of what rationality is about is not just hoping that beneficial things will happen.
“Cognitive bias” is a term that comes out of the psychology literature, and there have been plenty of studies in the domain. It’s my understanding that nobody in academia has found that you get very far by teaching people to recognize biases.
Outside of academia, we have CFAR, which did think about whether you can get people to be more rational by giving them exercises, and came to the conclusion that those exercises should look different.
In a case like this, asking yourself “What evidence do I have that what I hope for will actually happen?” and “What sources, be it academics or experts I might interview, could give me more evidence?” would be much more productive than asking “What things in my thought process might be labeled as biases?”