Something about training people to categorize errors—instead of just making good decisions—rubs me the wrong way
Are you able to pinpoint exactly what gives you this feeling? The goal of this problem type would be to train the ability to recognize bias to the point where it becomes second nature, with the hope that this same developed skill would also trigger in your own thought processes. I believe it’s generally easier to evaluate the truthfulness of a statement than to come up with one initially, so this training would help make the “biased thought detector” more accurate.
Relatedly, the “ask users to predict an outcome based on limited data” example sounds like a description of that genre I invented (though “Bite-Sized” suggests you’re thinking in terms of something much more polished/generally-accessible).
That’s really cool! I definitely see the value in multi-step case study problems, as they would require more complex reasoning than smaller bite-sized problems might. Themed problems could make the process much more engaging as I think this kind of training can get a bit dull with overly generic examples. Combining the depth of case studies with the accessibility of simpler exercises might strike a nice balance.
I look forward to seeing what comes of this. If you want anything playtested, please let me know.
Definitely will take you up on this! I’m working on the prototype and should have something simple in the next few weeks. I’m considering starting a sequence to document the progress to get more visibility, interest, and immediate feedback.
> Are you able to pinpoint exactly what gives you this feeling?
Less a single sharp pinpoint, more a death of ~~a thousand~~ six cuts:

- The emphasis on learning the names of biases is kinda guessing-the-teacher’s-password-y.
- You’d need to put forth an unusual effort to make sure you’re communicating the subset of psychological research which actually replicates reliably.
- Any given bias might not be present in the student or their social/business circle.
- The suggested approach implies that the set of joints psychologists currently carve at is the ‘best’ one; what if I happen to see Bias A and Bias B as manifestations of Bias C?
- I worry some students would round this off to “here’s how to pathologize people who disagree with me!” training.
- Like I said, this is the kind of fruit that’s low-hanging enough that it’s mostly already picked.
All that said, I still think this is potentially worthwhile and would still playtest it if you wanted. But I’m much more excited about literally every other idea you mentioned.
> The goal of this problem type would be to train the ability to recognize bias to the point where it becomes second nature, with the hope that this same developed skill would also trigger in your own thought processes.
Part of what rationality is about is that you don’t just hope for beneficial things to happen.
“Cognitive bias” is a term that comes out of the psychology literature, and there have been plenty of studies in the domain. It’s my understanding that nobody in academia has found that you get very far by teaching people to recognize biases.
Outside of academia, we have CFAR, which did think about whether you can get people to be more rational by giving them exercises, and which came to the conclusion that those exercises should look different.
In a case like this, asking yourself “What evidence do I have that what I hope for will actually happen?” and “What sources, be they academic people or experts I might interview, could give me more evidence?” would be much more productive questions than “What things in my thought process might be labeled as biases?”
I appreciate the reply!