I don’t think H and T are rational agents in the first place, since they violate non-dogmatism: they place probability one on non-tautologous propositions (equivalently, probability zero on propositions that aren’t contradictions).
The common prior assumption, if true, is only supposed to apply amongst rational agents.
I would further point out that although I can’t use a classic Dutch book to show H and T are irrational, I can use the relaxed Dutch books of the sort used in the definition of logical induction: H and T are irrational because they expose themselves to unbounded exploitation. So I’m using broadly the same rationality framework to rule out H and T as I am to argue for the common prior assumption.
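To make "unbounded exploitation" concrete, here's a minimal toy simulation (my own illustration, not the formal trader framework from the logical induction paper; the premium and the fair coin are assumptions). The point is that the trader's risk is bounded while its profit against H grows without bound:

```python
import random

# Toy sketch: H is dogmatic, assigning probability 0 to tails, so H will
# happily sell a "$1 if tails" contract for any positive price.

def trader_profit(n_rounds: int, premium: float = 0.01, seed: int = 0) -> float:
    """Cumulative profit of a trader who buys tails-contracts from H."""
    rng = random.Random(seed)
    profit = 0.0
    for _ in range(n_rounds):
        profit -= premium            # pay H a tiny premium per contract
        if rng.random() < 0.5:       # fair coin comes up tails
            profit += 1.0            # contract pays out; H loses $1
    return profit

# The trader's per-round downside is capped at `premium`, but profit grows
# roughly linearly in n_rounds -- bounded risk, unbounded exploitation.
for n in (100, 10_000, 1_000_000):
    print(n, round(trader_profit(n), 2))
```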
The claim is more like: two TDT agents should never knowingly disagree about probabilities.
Here’s an intuition-pump. If I am sitting next to Alice and we disagree, we should have already bet with each other. Any bookie who comes along and tries to profit off of our disagreement should be unable to, because we’ve already made all the profitable exchanges we can. We’ve formed a Critch coalition, in order to coordinate rationally. So our apparent beliefs, going forward, will be a Bayesian mixture of our (would-be) individual beliefs. We will apparently have a common prior, when betting behavior is examined.
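Here's a toy sketch of how that looks from the outside (my illustration; the wealth-weighted mixture with likelihood-proportional reweighting is the standard Bayesian-mixture behavior that Critch's coalition result points at, and the specific numbers are made up):

```python
# Alice and I pool our stakes. Our joint betting odds are a wealth-weighted
# mixture of our individual beliefs, and each member's weight is multiplied
# by the likelihood they assigned to what actually happened -- exactly a
# Bayesian mixture, i.e. an apparent common prior.

def coalition_prob(weights, probs):
    """Probability of heads the coalition quotes for the next flip."""
    return sum(w * p for w, p in zip(weights, probs)) / sum(weights)

def update_weights(weights, probs, heads: bool):
    """Reweight members by the likelihood each assigned to the outcome."""
    return [w * (p if heads else 1 - p) for w, p in zip(weights, probs)]

# Alice thinks P(heads) = 0.9; I think 0.4; we stake equal wealth.
weights, probs = [1.0, 1.0], [0.9, 0.4]
print(coalition_prob(weights, probs))    # 0.65 -- our apparent common prior

weights = update_weights(weights, probs, heads=True)   # coin lands heads
print(coalition_prob(weights, probs))    # ~0.75 -- shifts toward Alice
```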
Sure, you can fix unbounded downside risk by giving H a finite budget. You can fix the dogmatism by making H have an ϵ = 1/(3↑↑↑3) probability of tails.
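For anyone unfamiliar with the up-arrows, that's Knuth's notation; here's its recursive definition, purely to pin the notation down (the number 3↑↑↑3 itself is far beyond anything computable in practice, which is the point of choosing it as ϵ):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ↑^n b; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3↑↑3 = 3^(3^3) = 3^27 = 7625597484987 is already huge;
# 3↑↑↑3 = 3↑↑(3↑↑3) is a power tower of ~7.6 trillion 3s,
# so ϵ = 1/(3↑↑↑3) is nonzero but unimaginably small.
print(up_arrow(3, 2, 3))  # 7625597484987
```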
If you and H have a chance to bet with each other before going to the bookie, then the bookie won’t be able to Dutch book the two of you, because you will have already separated H from H’s money.
If you can’t bet with H directly for some reason, then a bookie can Dutch book you and H by acting as a middleman and skimming off some money.
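Putting toy numbers on the two cases (my illustration; the prices are made up, and I assume a $1-payout contract on tails, with your probability at 0.5 and H's effectively at 0):

```python
# Contract: pays $1 if tails. I value it at $0.50; H values it at ~$0,
# so H will sell it for pennies.
my_value, h_value = 0.50, 0.0

# Case 1: we trade directly at any price in between, say $0.10. Both of us
# think we gained, and no exploitable spread is left for a bookie.
direct_price = 0.10
print("my expected gain:", my_value - direct_price)   # 0.40, by my odds
print("H's expected gain:", direct_price - h_value)   # 0.10, by H's odds

# Case 2: we can't trade directly, so a bookie intermediates: buys the
# contract from H for $0.01 and sells it to me for $0.49. The bookie's
# positions cancel, so the $0.48 spread is skimmed risk-free -- a Dutch
# book against the two of us collectively.
bookie_buy, bookie_sell = 0.01, 0.49
print("bookie's risk-free skim:", bookie_sell - bookie_buy)  # 0.48
```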