I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudoscience is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight.
I’ll point out here that reversed stupidity is not intelligence, and that for every possible error, there is an opposite possible error.

In my view, if someone’s numbers are wrong, that should be dealt with on the object level (e.g. “0.001 is too low”, with arguments for why), rather than retreating to the meta level of “using numbers caused you to err”. The perspective I come from is wanting to avoid the opposite problem, where being vague about one’s beliefs allows one to get away without subjecting them to rigorous scrutiny. (This, too, by the way, is a major hallmark of pseudoscience.)
But I’ll note that even as we continue to argue under opposing rhetorical banners, our disagreement on the practical issue seems to have mostly evaporated; see here for instance. You also do admit in the end that fear of poor calibration is what is underlying your discomfort with numerical probabilities:
If I wish to express these probabilities as numbers, however, this is not a legitimate step unless the resulting numbers can be justified… If they can be so justified, then we say that the intuitive estimate is “well-calibrated.” However, calibration is usually not possible in practice...
As a theoretical matter, I disagree completely with the notion that probabilities are not legitimate or meaningful unless they’re well-calibrated. There is such a thing as a poorly-calibrated Bayesian; it’s a perfectly coherent concept. The Bayesian view of probabilities is that they refer specifically to degrees of belief, and not anything else. We would of course like the beliefs so represented to be as accurate as possible; but they may not be in practice.
If my internal “Bayesian calculator” believes P(X) = 0.001, and X turns out to be true, I’m not made less wrong by having concealed the number, saying “I don’t think X is true” instead. Less embarrassed, perhaps, but not less wrong.
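As an aside on what “less wrong” could mean quantitatively for a single stated number: a proper scoring rule such as the log score assigns a concrete penalty to an explicit P(X) = 0.001 once the outcome is known, which a vague “I don’t think X is true” never receives. The sketch below is only an illustration; the noncommittal 0.5 used for comparison is an arbitrary choice, not a figure from the exchange.

```python
import math

def log_loss(stated_p: float, outcome: bool) -> float:
    """Log score (in nats) of a single stated probability against the observed
    outcome. Lower is better; confident claims that turn out wrong are
    penalized heavily."""
    return -math.log(stated_p if outcome else 1.0 - stated_p)

# X was assigned P(X) = 0.001, and X turned out to be true.
print(log_loss(0.001, True))  # ~6.91
# For comparison, an arbitrary noncommittal 0.5 would have scored:
print(log_loss(0.5, True))    # ~0.69
```

Whether a single score of this kind genuinely tests the estimate is precisely what is disputed in the reply that follows.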
komponisto:

In my view, if someone’s numbers are wrong, that should be dealt with on the object level (e.g. “0.001 is too low”, with arguments for why), rather than retreating to the meta level of “using numbers caused you to err”.
Trouble is, sometimes numbers can be not even wrong, with their very definition lacking logical consistency or any defensible link with reality. It is that category that I am most concerned with, and I believe that it sadly occurs very often in practice, with entire fields of inquiry sometimes degenerating into meaningless games with such numbers. My honest impression is that in our day and age, such numerological fallacies have been responsible for much greater intellectual sins than the opposite fallacy of avoiding scrutiny by excessive vagueness, although the latter phenomenon is not negligible either.
You also do admit in the end that fear of poor calibration is what is underlying your discomfort with numerical probabilities:
Here we seem to be clashing about terminology. I think that “poor calibration” is too much of a euphemism for the situations I have in mind, namely those where sensible calibration is altogether impossible. I would instead use some stronger expression clarifying that the supposed “calibration” is done without any valid basis, not that the result is poor because some unfortunate circumstance occurred in the course of an otherwise sensible procedure.
There is such a thing as a poorly-calibrated Bayesian; it’s a perfectly coherent concept. The Bayesian view of probabilities is that they refer specifically to degrees of belief, and not anything else.
As I explained in the above lengthy comment, I simply don’t find numbers that “refer specifically to degrees of belief, and not anything else” a coherent concept. We seem to be working with fundamentally different philosophical premises here.
Can these numerical “degrees of belief” somehow be linked to observable reality according to the criteria I defined in my reply to the points (1)-(2) above? If not, I don’t see how admitting such concepts can be of any use.
If my internal “Bayesian calculator” believes P(X) = 0.001, and X turns out to be true, I’m not made less wrong by having concealed the number, saying “I don’t think X is true” instead. Less embarrassed, perhaps, but not less wrong.
But if you do this 10,000 times, and the number of times X turns out to be true is small relative to 10,000 yet nowhere near the 10 occurrences that P(X) = 0.001 predicts, you are much more wrong than if you had just been saying “X is highly unlikely” all along.
On the other hand, if we’re observing X as a single event in isolation, I don’t see how this tests your probability estimate in any way. But I suspect we have some additional philosophical differences here.
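To make the repeated-trials test above concrete: 10,000 statements each assigned P(X) = 0.001 should come true about 10,000 × 0.001 = 10 times if the estimates are well calibrated. The sketch below simulates such a check; the assumed true frequency of 0.01 is an arbitrary illustrative choice, not a figure from the discussion.

```python
import random

random.seed(0)

STATED_P = 0.001   # probability assigned to each of the 10,000 statements
TRUE_P = 0.01      # assumed actual frequency (arbitrary, for illustration only)
N = 10_000

# Count how often X turns out true when the stated 0.001 is badly off the mark.
observed = sum(random.random() < TRUE_P for _ in range(N))
expected = STATED_P * N

print(f"expected occurrences if well-calibrated: ~{expected:.0f}")
print(f"observed occurrences in this simulation:  {observed}")
# A count of roughly 100 against an expected 10 is the sort of gap that would
# show the stated 0.001 was poorly calibrated, even though "X is highly
# unlikely" would still have been a fair description.
```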