“Another way of talking about probabilities is to talk about bets”—You can handle these bets in the decision-theory layer rather than the probability layer. See the heading “A red herring: betting arguments” in this post.
I find this view unsatisfying, in the sense that if we accept “well, maybe it’s just some problem with our decision theory—nothing to do with probability…” as a response in a case like this, then it seems to me that we have to abandon the whole notion that probability estimates imply anything about willingness to bet (in any particular way, or at all).
Now, I happen to hold this view myself (for somewhat other reasons), but I’ve seen nothing but strong pushback against it on Less Wrong and in other rationalist spaces. Am I to understand this as a reversal? That is, suppose I claim that the probability of some event X is P(X); I’m then asked whether I’d be willing to make some bet (my willingness for which, it is alleged, is implied by my claimed probability estimate); and I say: “No, no. I didn’t say anything at all about what my decision theory is like, so you can’t assume a single solitary thing about what bets I am or am not willing to make; and, in any case, probability theory is prior to decision theory, so my probability estimate stands on its own, without needing any sort of validation from my betting behavior!”—is this fine? Is it now the consensus view, that such a response is entirely valid and unimpeachable?
I personally think decision theory is more important than probability theory. And anthropics does introduce some subtleties into the betting setup—you can’t bet or receive rewards if you’re dead.
But there are ways around it. For instance, if the Cold War is still on, we can ask how large X has to be before you would prefer X units of consumption after the war, conditional on surviving, to 1 unit of consumption now.
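To make that elicitation concrete, here is a minimal sketch (my own illustration, not the commenter’s; it assumes utility linear in consumption and no time discounting, so the indifference point X pins down the implied survival probability as 1/X):

```python
def implied_survival_probability(x_indifference: float) -> float:
    """Given the X at which you are indifferent between X units of
    post-war consumption (received only if you survive) and 1 unit now,
    back out the survival probability you are implicitly acting on.

    Assumes utility linear in consumption and no time discounting,
    so indifference means p_survive * X = 1.
    """
    if x_indifference <= 0:
        raise ValueError("X must be positive")
    return 1.0 / x_indifference

# Example: if you demand X = 4 units after the war to give up 1 unit now,
# you are behaving as though P(survive) = 0.25.
print(implied_survival_probability(4))  # 0.25
```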
Obviously the you that survived the Cold War, and knows it, cannot be offered a meaningful bet on survival itself. But we can offer you a bet like: “New evidence has just come to light showing that the Cuban Missile Crisis was either far more dangerous or far safer than we thought. Before we tell you the evidence, care to bet on which direction the evidence will point?”
Then, since we can actually express these conditional probabilities as bets, the usual Dutch Book arguments show that they must be updated in the standard Bayesian way.
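To see the Dutch Book point concretely, here is a small numerical sketch of the standard de Finetti construction (my illustration, not part of the original comment): if your price for a called-off bet on A given B differs from your price for A-and-B divided by your price for B, a bookie can combine three bets so that your net payoff is the same negative constant in every outcome.

```python
# Agent's betting prices (probabilities) -- illustrative numbers only.
p_AB = 0.3   # price for a bet paying 1 if A-and-B
p_B  = 0.6   # price for a bet paying 1 if B
q    = 0.7   # price for a conditional bet on A given B (called off if not-B);
             # a coherent agent would set q = p_AB / p_B = 0.5, but here q != 0.5

def agent_payoff(A: bool, B: bool) -> float:
    """Agent's net payoff when the bookie (i) sells the agent a unit
    conditional bet on A given B at price q, (ii) buys a unit bet on
    A-and-B from the agent at price p_AB, and (iii) sells the agent a
    stake-q bet on B at price p_B per unit."""
    conditional = (1.0 if (A and B) else 0.0) - q * (1.0 if B else 0.0)
    sell_AB     = p_AB - (1.0 if (A and B) else 0.0)
    buy_B       = q * ((1.0 if B else 0.0) - p_B)
    return conditional + sell_AB + buy_B

# The payoff equals p_AB - q * p_B in every outcome: a sure loss whenever
# q > p_AB / p_B (flip the bet directions if q is instead too low).
for A in (True, False):
    for B in (True, False):
        print(A, B, round(agent_payoff(A, B), 3))  # always -0.12
```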
Well, creating a decision theory that takes into account the possibility of dying is trivial. If the fraction of wins you survive is a, the fraction of losses you survive is b, and your initial probability of winning is p (with q = 1 - p the probability of losing), then:
Adjusted probability = ap / (ap + bq)
This is 1 when b = 0: if you never survive a loss, then conditional on surviving you must have won.
This works for any event, not just wins or losses. We can easily derive the betting scheme from the adjusted probability. Is having to calculate the betting scheme from an adjusted probability really a great loss?
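As a sketch of that derivation (my own illustration of the formula above, with hypothetical function names): compute the survival-adjusted probability, then read the fair betting odds off it.

```python
def adjusted_probability(p: float, a: float, b: float) -> float:
    """Probability of having won, conditional on surviving.

    p: prior probability of winning (q = 1 - p of losing)
    a: fraction of wins in which you survive
    b: fraction of losses in which you survive
    """
    q = 1.0 - p
    return (a * p) / (a * p + b * q)

def fair_odds(prob: float) -> float:
    """Stake you should be willing to risk to win 1 unit, implied by prob."""
    return prob / (1.0 - prob)

# Example: prior p = 0.5; you survive 90% of wins but only 10% of losses.
p_adj = adjusted_probability(0.5, a=0.9, b=0.1)
print(p_adj)             # ~0.9
print(fair_odds(p_adj))  # ~9.0: risk 9 against 1 on having won, given survival
# With b = 0 the adjusted probability is 1: any surviving bettor must have won.
```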