A major problem with these approaches is that for the majority of real-life questions, the circuits in your brain that are best capable of analyzing the situation and giving you an answer along with a vague feeling of certainty are altogether different from those that you can use to run these heuristics. This is why, in my opinion, attempts to assign numerical probabilities to common-sense judgments usually don’t make sense.
If your brain has the ability to make a common-sense judgment about some real-world phenomenon, this ability will typically be implemented in the form of a black-box module that will output the answer along with some coarsely graded intuitive feeling of certainty. You cannot open this black box and analyze its algorithms in order to upgrade this vague feeling into a precise numerical probability estimate. If you instead use heuristics that yield numerical probabilities, such as finding reference classes, this means side-stepping your black-box module and using an altogether different algorithm instead—and the probability estimate you’ll arrive at this way won’t be pertinent to your best analysis that uses the black-box intuition module.
Surely you can teach yourself to map intuitive certainty onto probabilities, though. If you come up with rough labels for your levels of intuitive certainty, and record how often judgments at each label turn out right, you'd already get a rough corresponding probability for each label.
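A minimal sketch of the bookkeeping this would take, in Python. The certainty labels and the log entries here are made up for illustration; the point is just that an empirical hit rate per label is the "really rough corresponding probability":

```python
from collections import defaultdict

# Hypothetical calibration log: each entry pairs an intuitive-certainty
# label with whether the judgment later turned out to be correct.
records = [
    ("almost certain", True),
    ("almost certain", True),
    ("pretty sure", True),
    ("pretty sure", True),
    ("pretty sure", False),
    ("just a hunch", True),
    ("just a hunch", False),
]

# Tally hits and totals per label.
tally = defaultdict(lambda: [0, 0])  # label -> [hits, total]
for label, correct in records:
    tally[label][1] += 1
    if correct:
        tally[label][0] += 1

# The empirical hit rate for each label is the rough probability
# the label corresponds to.
for label, (hits, total) in tally.items():
    print(f"{label!r}: {hits}/{total} = {hits / total:.2f}")
```

With enough entries per label, the printed rates would serve as crude calibrated probabilities for future judgments carrying the same label.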
Edit: Oh, this is PredictionBook's raison d'être.