Here’s an explanation that may help.

You can think of classical Bayesian reasoning as justified by Dutch Book arguments. However, for a Dutch Book argument to be convincing, there’s an important condition that we need: the bookie needs to be just as ignorant as the agent. If the bookie makes money off the agent because the bookie knows an insider secret about the horse race, we don’t think of this as “irrational” on the part of the agent.
This assumption is typically packaged into the part of a Dutch Book argument where we say the Dutch Book “guarantees a net loss”—if the bookie is using insider knowledge, then it’s not a “guarantee” of a net loss. This “guarantee” needs to be made with respect to all the ways things could empirically turn out.
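To make the classical picture concrete, here’s a minimal sketch, with made-up numbers, of how a bookie locks in a profit against incoherent credences, checked against every way things could empirically turn out:

```python
# Classic Dutch Book sketch: an agent whose credences in "rain" and
# "no rain" sum to more than 1 buys a pair of tickets that loses money
# in every possible world. All numbers are illustrative.

credences = {"rain": 0.6, "no_rain": 0.6}  # incoherent: the two sum to 1.2

# The bookie sells a $1 ticket on each outcome at the agent's own prices,
# so the agent pays 1.2 up front but collects exactly $1 no matter what.
price_paid = sum(credences.values())

for world in credences:  # every way things could empirically turn out
    payout = 1.0  # exactly one ticket pays off in each world
    print(f"world={world}: net = {payout - price_paid:+.2f}")  # -0.20 both times
```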
However, this distinction becomes fuzzier when we consider embedded agency, and in particular, computational uncertainty. If the agent has observed the length of two sides of a right triangle, then it is possible to compute the length of the remaining side. Should we say, on the one hand, that there is a Dutch Book against agents who do not correctly compute this third length? Or should we complain that a bookie who has completed the computation has special insider knowledge, which our agent may lack due to not having completed the computation?
If we bite the “no principled distinction” bullet, we can develop a theory where we learn to avoid making logical mistakes (such as classical Dutch Books, or the triangle example) in exactly the same manner that we learn to avoid empirical mistakes (such as learning that the sun rises every morning). Instead of getting a guarantee that we never give in to a Dutch Book, we get a bounded-violations guarantee; we can only lose so much money that way before we wise up.
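As a toy illustration of that bounded-violations flavor (this is not the actual construction; the loss cap and the idea of a recognizable “bet family” are simplifying assumptions), the agent could keep accepting a pattern of bets only until losses on that pattern hit a cap:

```python
# Toy model of "we can only lose so much money that way before we wise up":
# the agent tracks cumulative losses per recognizable family of bets and
# refuses further bets from a family once its losses cross a cap. The cap,
# the bet family, and the per-round loss are all hypothetical.

CAP_CENTS = 500  # assumed bound (in cents) on losses tolerated per trick

losses: dict[str, int] = {}  # cumulative realized loss in cents, per family

def accept(family: str) -> bool:
    """Accept a bet unless this family has already cost us too much."""
    return losses.get(family, 0) < CAP_CENTS

def settle(family: str, loss_cents: int) -> None:
    """Record the realized loss once a bet resolves."""
    losses[family] = losses.get(family, 0) + loss_cents

# A bookie running the same Dutch Book forever only wins a bounded amount:
for _ in range(100):
    if accept("incoherent_book"):
        settle("incoherent_book", 20)  # each run of the book costs 20 cents

print(losses)  # {'incoherent_book': 500} -- the exploit stops paying out
```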
In this example, if I knew the Pythagorean theorem and had performed the calculation, I would be certain of the right answer. If I were not able to perform the calculation because of logical uncertainty (say the numbers were large), then relative to my current state of knowledge I could avoid Dutch Books by assigning probabilities to side lengths. This would make me impossible to money-pump in the sense of exploiting cyclical preferences. The fact that I could gamble more wisely if I had access to more computation doesn’t seem to undercut the reasons for using probabilities when I don’t.
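For instance, here’s a sketch with a 3-4-5 triple scaled up and made-up credences: coherent prices over a few candidate answers leave no bundle of bets that guarantees a loss, even before the computation is done:

```python
import math

# Sketch: the agent can't afford to compute sqrt(a^2 + b^2) exactly, so it
# spreads a coherent credence over a few candidate hypotenuse lengths.
# The legs and credences are made up (a 3-4-5 triple scaled by 16304).

a, b = 48_912, 65_216
candidates = {81_519: 0.25, 81_520: 0.50, 81_521: 0.25}  # sums to 1

def ticket_price(predicate) -> float:
    """Agent's price for a $1 ticket paying out iff predicate(true length)."""
    return sum(p for c, p in candidates.items() if predicate(c))

# Prices respect the Kolmogorov axioms over this partition, so complements
# sum to 1 and no bundle of these tickets is a guaranteed loss:
print(ticket_price(lambda c: c == 81_520))  # 0.5
print(ticket_price(lambda c: c != 81_520))  # 0.5

# A bookie with more compute just settles the question outright:
print(math.isqrt(a * a + b * b))  # 81520
```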
Now in the extreme adversarial case, a bookie could come along who knows my computational limits and only offers me bets where I lose in expectation. But this is also a problem for empirical uncertainty; in both cases, if you literally face a bookie who is consistently winning money from you, you could eventually infer that they know more than you and stop accepting their bets. I still see no fundamental difference between empirical and logical uncertainties.
> The fact that I could gamble more wisely if I had access to more computation doesn’t seem to undercut the reasons for using probabilities when I don’t.
I am not trying to undercut the use of probability in the broad sense of using numbers to represent degrees of belief.
However, if “probability” means “the Kolmogorov axioms”, we can easily undercut these by the argument you mention: we can consider a (quite realistic!) case where we don’t have enough computational power to enforce the Kolmogorov axioms precisely. We conclude that we should avoid easily-computed Dutch Books, but may be vulnerable to some hard-to-compute Dutch Books.
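Here’s a sketch of that distinction, with a deliberately small number so the “hard” computation actually runs; in a realistic case, imagine a number far beyond the agent’s compute budget:

```python
import math

# Sketch: the agent's credences about "is N prime?" pass every cheap
# (easily-computed) coherence check, yet a bookie who pays for the harder
# computation still wins at the agent's own prices. N is kept small so the
# example runs; the heuristic prior is a stand-in for any cheap guess.

N = 1_000_003

# Agent: can't afford to settle the question, so it uses the prime-density
# heuristic P(prime) ~ 2/ln(N) for an odd number near N.
p_prime = 2 / math.log(N)
credences = {"prime": p_prime, "composite": 1.0 - p_prime}

# Easily-computed Dutch Book checks: nonnegative, sums to 1. Both pass.
assert all(v >= 0.0 for v in credences.values())
assert abs(sum(credences.values()) - 1.0) < 1e-12

# Bookie: runs the hard-to-compute check (trial division here)...
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

truth = "prime" if is_prime(N) else "composite"

# ...then buys a $1 ticket on the true side at the agent's price, winning
# in expectation no matter which side turns out to be true.
print(truth, f"bookie profit per ticket: {1.0 - credences[truth]:.3f}")
```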
> Now in the extreme adversarial case, a bookie could come along who knows my computational limits and only offers me bets where I lose in expectation. But this is also a problem for empirical uncertainty; in both cases, if you literally face a bookie who is consistently winning money from you, you could eventually infer that they know more than you and stop accepting their bets. I still see no fundamental difference between empirical and logical uncertainties.
Yes, exactly. In the perspective I am offering, the only difference between bookies we stop betting with due to a history of losing money and bookies we avoid because we a priori know better is that the second kind corresponds to something we already knew (something we already had high prior weight on).
In the classical story, however, there are bookies we avoid a priori as a matter of logic alone (we could say that the classical perspective insists that the Kolmogorov axioms are known a priori—which is completely fine and good if you’ve got the computational power to do it).