In this example, if I knew the Pythagorean theorem and had performed the calculation, I would be certain of the right answer. If I were not able to perform the calculation because of logical uncertainty (say, because the numbers were large), then relative to my current state of knowledge I could avoid Dutch books by assigning probabilities to the possible side lengths. This would make me impossible to money-pump in the sense of having exploitable cyclical preferences. The fact that I could gamble more wisely if I had access to more computation doesn’t seem to undercut the reasons for using probabilities when I don’t.
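To make that concrete, here is a minimal sketch (in Python, with all numbers invented for illustration) of what assigning probabilities to candidate side lengths and pricing bets by expected value could look like:

```python
# A minimal sketch (all numbers invented for illustration) of pricing bets
# under logical uncertainty about a Pythagorean calculation we haven't done:
# spread credence over a few candidate hypotenuse values and pay the expected
# value for any ticket, rather than pretending to know the exact answer.

a, b = 20_348, 51_757           # hypothetical "large" legs
candidates = {                  # hypothetical credences over the hypotenuse,
    55_000: 0.2,                # chosen only so that they sum to 1
    55_600: 0.5,
    56_200: 0.3,
}

# Coherence check: credences over the mutually exclusive candidates sum to 1.
assert abs(sum(candidates.values()) - 1.0) < 1e-9

def fair_price(event, stake=1.0):
    """Most we would pay for a ticket worth `stake` if `event` turns out true."""
    return stake * sum(p for c, p in candidates.items() if event(c))

# A ticket that pays $1 if the hypotenuse is below 56,000:
print(fair_price(lambda c: c < 56_000))   # ~0.7

# Because every price comes from a single coherent distribution, no finite
# book of such tickets can guarantee us a loss; more computation (actually
# evaluating sqrt(a**2 + b**2)) would merely let us price them more sharply.
```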
Now in the extreme adversarial case, a bookie could come along who knows my computational limits and only offers me bets where I lose in expectation. But this is also a problem for empirical uncertainty; in both cases, if you literally face a bookie who is consistently winning money from you, you could eventually infer that they know more than you and stop accepting their bets. I still see no fundamental difference between empirical and logical uncertainties.
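As a toy illustration of that infer-and-stop response (the bookie's win rate, the even odds, and the stopping threshold below are all invented), one can simulate a better-informed bookie and a crude rule for walking away once the record is lopsided:

```python
import random

# A toy simulation (win rate, odds, and stopping rule all invented) of the
# adversarial case: a bookie offers even-odds bets only on questions he has
# already computed, so each accepted bet loses money in expectation.  A crude
# "walk away after a lopsided record" rule keeps the total damage bounded,
# which is the "infer they know more and stop" response in miniature.

random.seed(0)

bankroll = 0.0
wins = losses = 0
for rounds_played in range(1, 1001):
    we_win = random.random() < 0.1    # the bookie is well informed, not perfect
    if we_win:
        bankroll += 1.0
        wins += 1
    else:
        bankroll -= 1.0
        losses += 1
    if losses - wins >= 20:           # record is clearly lopsided: stop betting
        break

print(rounds_played, bankroll)        # we quit early with a bounded loss
```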
The fact that I could gamble more wisely if I had access to more computation doesn’t seem to undercut the reasons for using probabilities when I don’t.
I am not trying to undercut the use of probability in the broad sense of using numbers to represent degrees of belief.
However, if “probability” means “the Kolmogorov axioms”, we can easily undercut these by the argument you mention: we can consider a (quite realistic!) case where we don’t have enough computational power to enforce the Kolmogorov axioms precisely. We conclude that we should avoid easily-computed Dutch books, but may remain vulnerable to some hard-to-compute Dutch books.
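For concreteness, here is the standard easily-computed Dutch book in miniature, with invented credences: if my probabilities for A and not-A sum to more than 1, a bookie who sells me a ticket on each at my own prices profits either way.

```python
# A minimal sketch of an "easily-computed" Dutch book (the 0.6 credences are
# invented for illustration).  If my credences in A and not-A sum to more
# than 1, a bookie can sell me a $1 ticket on each at my own prices and
# profit no matter how A turns out.

credence_A     = 0.6   # hypothetical credences that violate additivity:
credence_not_A = 0.6   # 0.6 + 0.6 > 1

price_paid = credence_A + credence_not_A   # I pay 1.20 for the two tickets
payout     = 1.0                           # exactly one of them pays $1

guaranteed_loss = price_paid - payout
print(round(guaranteed_loss, 2))           # 0.2, whether or not A is true

# Enforcing the Kolmogorov axioms on this partition (credences summing to 1)
# blocks every book of this cheap-to-check form; the point above is that a
# bounded reasoner can afford this for easily-computed books while remaining
# exposed to books whose incoherence is expensive to detect.
```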
Now in the extreme adversarial case, a bookie could come along who knows my computational limits and only offers me bets where I lose in expectation. But this is also a problem for empirical uncertainty; in both cases, if you literally face a bookie who is consistently winning money from you, you could eventually infer that they know more than you and stop accepting their bets. I still see no fundamental difference between empirical and logical uncertainties.
Yes, exactly. In the perspective I am offering, the only difference between bookies we stop betting with due to a history of losing money and bookies we stop betting with due to knowing better a priori is that the second kind corresponds to something we already knew (something we already had high prior weight on).
In the classical story, however, there are bookies we avoid a priori as a matter of logic alone (we could say that the classical perspective insists that the Kolmogorov axioms are known a priori, which is completely fine and good if you’ve got the computational power to enforce them).