Suppose most people think there’s a shrew in the basement, and Richard Feynman thinks there’s a beaver. If you’re pretty sure it’s not a shrew, two possible reactions include:
- ‘Ah, the truth is probably somewhere in between these competing perspectives. So maybe it’s an intermediate-sized rodent, like a squirrel.’
- ‘Ah, Feynman has an absurdly good epistemic track record, and early data does indicate that the animal’s probably bigger than a shrew. So I’ll go with his guess and say it’s probably a beaver.’
But a third possible response is:
- ‘Ah, if Feynman’s right, then a lot of people are massively underestimating the rodent’s size. Feynman is a person too, and might be making the same error (just to a lesser degree); so my modal guess will be that it’s something bigger than a beaver, like a capybara.’
In particular, you may want to go more extreme than Feynman if you think there’s something systematically causing people to underestimate a quantity (e.g., a cognitive bias—the person who speaks out first against a bias might still be affected by it, just to a lesser degree), or systematically causing people to make weaker claims than they really believe (e.g., maybe people don’t want to sound extreme or out-of-step with the mainstream view).
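As a rough numerical sketch of that "shared bias" story (the multiplicative-bias model and all of the specific numbers here are invented for illustration, not taken from the comment), here's what happens when the crowd keeps the full bias and the contrarian only sheds part of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the true quantity is X, and a shared bias makes
# everyone report only a fraction of it. The crowd keeps the full bias;
# the contrarian ("Feynman") notices it and corrects only partway.
true_x = 100.0
crowd_fraction = 0.2     # crowd reports ~20% of the true value
feynman_fraction = 0.6   # Feynman reports ~60% of it

n = 100_000
crowd_estimates = true_x * crowd_fraction * rng.lognormal(0.0, 0.1, n)
feynman_estimates = true_x * feynman_fraction * rng.lognormal(0.0, 0.1, n)

# Averaging the two estimates ("squirrel") still undershoots badly, while
# scaling Feynman's estimate up by his assumed residual bias ("capybara")
# recovers the truth -- *if* you know the bias structure, which is the catch.
averaged = (crowd_estimates + feynman_estimates) / 2
extrapolated = feynman_estimates / feynman_fraction

print(round(crowd_estimates.mean()),    # ~20
      round(feynman_estimates.mean()),  # ~60
      round(averaged.mean()),           # ~40
      round(extrapolated.mean()))       # ~100
```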
This is true! But I think it’s important to acknowledge that this depends a lot on details of Feynman’s reasoning process, and it doesn’t go in a consistent direction. If Feynman is aware of the bias, he may have already compensated for it in his own estimate, so compensating on his behalf would be double-counting the adjustment. And sometimes the net incentive is to overestimate, not to underestimate, because you’re trying to sway the opinion of averagers, or because being more contrarian gets attention, or because shrew-thinkers feel like an outgroup.
In the end, you can’t escape from detail. But if you were to put full power into making this heuristic work, the way to do it would be to look at past cases of Feynman-vs-world disagreement (broadening the “Feynman” and “world” categories until there’s enough training data), and try to get a distribution empirically.
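Here's a minimal sketch of what that empirical step could look like, assuming you had past disagreements recorded as (crowd estimate, contrarian estimate, eventual true value) triples. The placeholder numbers and the log-space least-squares model are my own illustration, not something specified in the comment:

```python
import numpy as np

# Placeholder history of past Feynman-vs-world disagreements over positive
# quantities: crowd estimate, lone-contrarian estimate, eventual true value.
crowd = np.array([1.0, 3.0, 2.0, 5.0, 0.5])
contrarian = np.array([4.0, 10.0, 9.0, 20.0, 2.0])
truth = np.array([6.0, 18.0, 12.0, 35.0, 3.0])

# Fit log(truth) ~ a + b*log(crowd) + c*log(contrarian) by least squares;
# the residuals give a crude empirical error distribution to carry forward.
X = np.column_stack([np.ones_like(crowd), np.log(crowd), np.log(contrarian)])
coeffs, *_ = np.linalg.lstsq(X, np.log(truth), rcond=None)
residuals = np.log(truth) - X @ coeffs

def predictive_samples(new_crowd: float, new_contrarian: float) -> np.ndarray:
    """Empirical distribution over the true value for a new disagreement."""
    x = np.array([1.0, np.log(new_crowd), np.log(new_contrarian)])
    return np.exp(x @ coeffs + residuals)

# New case: crowd says ~1, the contrarian says ~5 (units arbitrary).
print(predictive_samples(1.0, 5.0).round(1))  # a spread, not a point guess
```

Whether a fit like this lands on "beaver" or "capybara" then depends entirely on what the past cases show, which is the point: you get the direction and size of the adjustment from data rather than from the heuristic alone.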
Endorsed!
Have you ever seen this work for an advance prediction? It seems like you'd need to be in a better epistemic position than Feynman, which is pretty hard.