Do you not think that this feeling response can be trained through calibration exercises and by making and checking predictions?
Well, sometimes frequentism can come to the rescue, in a sense. If you are repeatedly faced with an identical situation where it’s necessary to make some common-sense judgment, e.g. on an assembly line, you can look at your past performance to predict how often you’ll be correct in the future. (This assumes you’re not getting better or worse over time, of course.) However, what you’re doing in that case is treating a part of your own brain as a black box whose behavior you’re testing empirically to extrapolate a frequentist rule—you are not performing the judgment itself as a rigorous Bayesian procedure that would give you the probability for the conclusion.
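To make that frequentist move concrete, here’s a minimal sketch in Python (the track record and all numbers are invented for illustration):

    # Estimate future accuracy from a hypothetical log of past judgments
    # (1 = correct, 0 = wrong), with a 95% Wilson score interval.
    import math

    def wilson_interval(correct, total, z=1.96):
        """95% Wilson score interval for the underlying accuracy rate."""
        p_hat = correct / total
        denom = 1 + z**2 / total
        center = (p_hat + z**2 / (2 * total)) / denom
        half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / total
                                       + z**2 / (4 * total**2))
        return center - half, center + half

    past_judgments = [1] * 994 + [0] * 6    # hypothetical track record
    correct, total = sum(past_judgments), len(past_judgments)
    print(correct / total)                  # point estimate: 0.994
    print(wilson_interval(correct, total))  # roughly (0.987, 0.997)

Note that this extrapolation says nothing about how any single judgment was produced; it only summarizes the black box’s historical hit rate.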
That said, it’s clear that smarter and more knowledgeable people think with greater accuracy and subtlety, so their intuitive feelings of (un)certainty are also subtler and more accurate. But there is still no magic step that will translate these feelings, output by black-box circuits in their brains, into numbers that could lay claim to mathematical rigor and accuracy.
you are not performing the judgment itself as a rigorous Bayesian procedure that would give you the probability for the conclusion.
No, but do you think it is meaningless to think of the messy brain procedure (that produces these intuitive feelings) as approximating this rigorous Bayesian procedure? This could probably be quantified using various tests. I don’t dispute that one can’t lay claim to mathematical rigor, but I’m not sure that means any human assignment of numerical probabilities is meaningless.
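For instance, here’s a rough sketch of one such test, scoring a person’s stated probabilities against what actually happened using the standard Brier score (all data hypothetical):

    # Brier score: mean squared error between stated probabilities and
    # 0/1 outcomes. 0 is perfect; a constant forecast of 0.5 earns 0.25.
    def brier_score(forecasts, outcomes):
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    stated   = [0.9, 0.7, 0.8, 0.6, 0.95]  # probabilities a person assigned
    happened = [1,   1,   0,   1,   1   ]  # what actually occurred
    print(brier_score(stated, happened))   # lower = closer to the Bayesian ideal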
Yes, with good enough calibration, it does make sense. If you have an assembly line worker whose job is to notice and remove defective items, and he’s been doing it with a steady (say) 99.7% accuracy for a long time, it makes sense to assign p=0.997 to each single judgment he makes about an individual item, and this number can be of practical value in managing production. However, this doesn’t mean that you could improve the worker’s performance by teaching him about Bayesianism; his brain remains a black box. The important point is that the same typically holds for highbrow intellectual tasks too.
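To see why that number has practical value, a back-of-the-envelope sketch (the defect rate and batch size are assumed, and I’m treating the 0.3% error rate as the chance of missing a defective item):

    # Back-of-the-envelope use of the calibrated p = 0.997 in planning.
    accuracy    = 0.997    # calibrated per-judgment accuracy of the worker
    defect_rate = 0.02     # hypothetical fraction of items that are defective
    batch_size  = 100_000

    defects = batch_size * defect_rate
    escaped = defects * (1 - accuracy)   # defective items the worker misses
    print(escaped)   # ~6 defective items expected to ship per batch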
Moreover, for the great majority of interesting questions about the world, we don’t have the luxury of a large reference class of trials on which to calibrate. Take, for example, the recent discussion about the AD-36 virus controversy. If you look at the literature, you’ll presumably form an opinion about this question with higher or lower certainty, depending on how much confidence you have in your own ability to judge such matters. But how do you calibrate this judgment to arrive at a probability estimate? There is no way.