“Can you write this formulation without invoking ghosts, spirits, mediums, or any other way for dead people to be able to think / observe / update?”—As I’ve already argued, this doesn’t matter because we don’t even have to be able to talk to them! In any case, I already provided a version of this problem where it’s a gameshow and the contestants are eliminated instead of killed.
Anyway, the possibilities are actually:
a) Bob observes that he survives the cold war
b) Bob observes that he didn’t survive the cold war
c) Bob doesn’t observe anything
You’re correct that b) is impossible, but c) isn’t, at least from the perspective of a pre-war Bob. Only a) is possible from the perspective of a post-war Bob, but only if he already knows that he is a post-war Bob. If he doesn’t know he is a post-war Bob, then it is new information and we should expect him to update on it.
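The update in case c) can be made concrete. Here is a minimal sketch in which Bob, not yet knowing he is a post-war Bob, learns he survived and treats that as ordinary evidence about how dangerous the war was. All the numbers (the prior and the two survival probabilities) are illustrative assumptions, not anything from the original discussion:

```python
# A Bob who doesn't know he is a post-war Bob learns that he survived.
# Treating survival as ordinary evidence, he updates his belief about
# how dangerous the war was.  All numbers below are made up.

def posterior_dangerous(prior_dangerous, p_survive_if_dangerous, p_survive_if_safe):
    """Bayes' rule: P(war was dangerous | Bob survived)."""
    joint_dangerous = prior_dangerous * p_survive_if_dangerous
    joint_safe = (1 - prior_dangerous) * p_survive_if_safe
    return joint_dangerous / (joint_dangerous + joint_safe)

# Prior 0.5 on "the war was dangerous"; survival is rare (0.1) if it was
# dangerous and common (0.9) if it was safe.
print(posterior_dangerous(0.5, 0.1, 0.9))  # ~0.1: surviving is evidence for "safe"
```

The point of the sketch is only that survival functions as evidence the moment it is learned, exactly like any other observation.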
“Another way of talking about probabilities is to talk about bets”—You can handle these bets in the decision theory rather than probability layer. See the heading A red herring: betting arguments in this post.
Update: The following may help. Bob is a man. Someone who never lies and is never mistaken tells Bob that he is a man. Did Bob learn anything? No, if he already knew his gender; yes, if he didn’t. Similarly, in the cold war example, Bob always knows that he is alive, but it doesn’t automatically follow that he knows he survived the cold war, or even that such a war happened.
I find this view unsatisfying, in the sense that if we accept “well, maybe it’s just some problem with our decision theory—nothing to do with probability…” as a response in a case like this, then it seems to me that we have to abandon the whole notion that probability estimates imply anything about willingness to bet in some way (or at all).
Now, I happen to hold this view myself (for somewhat other reasons), but I’ve seen nothing but strong pushback against it on Less Wrong and in other rationalist spaces. Am I to understand this as a reversal? That is, suppose I claim that the probability of some event X is P(X); I’m then asked whether I’d be willing to make some bet (my willingness for which, it is alleged, is implied by my claimed probability estimate); and I say: “No, no. I didn’t say anything at all about what my decision theory is like, so you can’t assume a single solitary thing about what bets I am or am not willing to make; and, in any case, probability theory is prior to decision theory, so my probability estimate stands on its own, without needing any sort of validation from my betting behavior!”—is this fine? Is it now the consensus view, that such a response is entirely valid and unimpeachable?
I personally think decision theory is more important than probability theory. And anthropics does introduce some subtleties into the betting setup—you can’t bet or receive rewards if you’re dead.
But there are ways around it. For instance, while the cold war is still on, we can ask how large X has to be for you to prefer X units of consumption after the war (paid only if you survive) to 1 unit of consumption now.
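The consumption question can be turned into an implied probability. Assuming a risk-neutral agent with no time discounting (both assumptions are mine, not the comment's), indifference between 1 unit now and X units after the war, paid only on survival, means s·X = 1, so the implied survival probability is 1/X:

```python
# Sketch of the consumption-bet reading, assuming risk neutrality and no
# time discounting (my simplifying assumptions).  Indifference between
# 1 unit now and X units post-war, paid only if you survive, gives
# s * X = 1, hence an implied survival probability s = 1 / X.

def implied_survival_prob(x_indifference):
    return 1.0 / x_indifference

print(implied_survival_prob(4.0))  # 0.25: demanding 4-for-1 implies P(survive) = 0.25
```

Relaxing either assumption (utility curvature, discounting) changes the arithmetic but not the basic idea: the indifference point encodes a survival probability.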
Obviously the you that survived the cold war and knows they survived cannot be given a decent bet on survival. But we can offer you a bet such as: “New evidence has just come to light showing that the Cuban Missile Crisis was far more dangerous/far safer than we thought. Before we tell you the evidence, care to bet on which direction the evidence will point?”
Then since we can actually express these conditional probabilities in bets, the usual Dutch Book arguments show that they must update in the standard way.
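The Dutch Book argument referred to here can be shown numerically. If an agent's conditional price for "A given B" differs from P(A and B) / P(B), a bookie can assemble a portfolio of bets, each at the agent's own stated prices, that loses money in every outcome. The prices below are hypothetical, chosen only to exhibit the sure loss:

```python
# Sketch of the Dutch-book argument: an agent whose conditional price for
# "A given B" differs from P(A and B) / P(B) accepts a portfolio that
# loses in every outcome.  All prices below are hypothetical.

P_B = 0.5   # agent's price for a bet on B
P_AB = 0.3  # agent's price for a bet on (A and B)
c = 0.8     # agent's conditional price; the coherent value is 0.3 / 0.5 = 0.6

# The agent buys the conditional bet on A given B (called off, stake
# refunded, if B fails), sells a bet on (A and B), and buys 0.8 units of
# a bet on B.  Net payoff in each of the three possible outcomes:
payoffs = {
    "A_and_B":    (1 - c) + (P_AB - 1) + 0.8 * (1 - P_B),
    "notA_and_B": (0 - c) + P_AB       + 0.8 * (1 - P_B),
    "not_B":      0       + P_AB       + 0.8 * (0 - P_B),
}

for outcome, net in payoffs.items():
    print(outcome, round(net, 10))  # -0.1 in every case: a guaranteed loss
assert all(net < 0 for net in payoffs.values())
```

The same construction run in reverse (with c below 0.6) also yields a sure loss, which is why coherent conditional prices must equal the ratio of the unconditional ones, i.e. must update by conditionalization.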
Well, creating a decision theory that takes into account the possibility of dying is trivial. If the fraction of wins you survive is a, the fraction of losses you survive is b, and your initial probability of winning is p (with q = 1 − p), then we get:
Adjusted probability = ap/(ap + bq)
This is 1 when b=0.
This works for any event, not just wins or losses. We can easily derive the betting scheme from the adjusted probability. Is having to calculate the betting scheme from an adjusted probability really a great loss?
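The adjusted probability and the betting scheme derived from it can be sketched in a few lines. Here p is the initial probability of winning, q = 1 − p, and a and b are the survival fractions given a win and a loss; the fair odds for a surviving bettor are just adjusted : (1 − adjusted):

```python
# Minimal sketch of the adjusted probability from the comment above:
# p = initial probability of winning, q = 1 - p, and a, b = the fractions
# of wins and losses, respectively, that you survive.

def adjusted_probability(p, a, b):
    q = 1 - p
    return (a * p) / (a * p + b * q)

print(adjusted_probability(0.5, 1.0, 0.5))  # ~0.667: survivors should favour "win"
print(adjusted_probability(0.5, 1.0, 0.0))  # 1.0: if nobody survives a loss
```

Deriving the betting scheme from the adjusted probability is then the usual step of quoting odds of adjusted : (1 − adjusted), which is the sense in which nothing is lost by routing the calculation through the adjustment.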