you are not performing the judgment itself as a rigorous Bayesian procedure that would give you the probability for the conclusion.
No, but do you think it is meaningless to regard the messy brain procedure (the one that produces these intuitive feelings) as approximating this rigorous Bayesian procedure? This could probably be quantified with various tests. I grant that one couldn't lay claim to mathematical rigor, but I'm not sure that means any human assignment of numerical probabilities is meaningless.
Yes, with good enough calibration, it does make sense. If you have an assembly line worker whose job is to notice and remove defective items, and he's been doing it with a steady (say) 99.7% accuracy for a long time, it makes sense to assign p=0.997 to each judgment he makes about an individual item, and this number can be of practical value in managing production. However, this doesn't mean that you could improve the worker's performance by teaching him about Bayesianism; his brain remains a black box. The important point is that the same typically holds for highbrow intellectual tasks too.
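To make the arithmetic behind this kind of calibration concrete, here is a minimal sketch (the trial counts and the standard-error comparison are my own illustration, not part of the worker example above):

```python
from math import sqrt

# Hypothetical track record: the worker's past judgments on a large
# reference class of items (numbers invented for illustration).
n_trials = 100_000   # items judged
n_correct = 99_700   # judgments that turned out to be right

# The frequency estimate is the calibrated probability we attach
# to any single new judgment from the same black-box process.
p = n_correct / n_trials   # 0.997

# A rough standard error shows why the size of the reference class
# matters: with 100,000 trials the estimate is pinned down tightly...
se_large = sqrt(p * (1 - p) / n_trials)

# ...but the same 99.7% figure based on only 10 trials would be
# nearly meaningless as a calibrated probability.
se_small = sqrt(p * (1 - p) / 10)

print(f"p = {p:.4f}, std. error ~ {se_large:.5f} (n = {n_trials})")
print(f"p = {p:.4f}, std. error ~ {se_small:.3f}   (n = 10)")
```

The contrast between the two standard errors is exactly what the next point turns on: without a large reference class of comparable trials, there is nothing to pin the number down.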
Moreover, for the great majority of interesting questions about the world, we don't have the luxury of a large reference class of trials on which to calibrate. Take, for example, the recent discussion of the AD-36 virus controversy. If you look at the literature, you'll presumably form an opinion on the question with higher or lower certainty, depending on how much confidence you have in your own ability to judge such matters. But how do you calibrate this judgment so as to arrive at a probability estimate? There is no way.