… But for the question at the beginning of the post, we aren’t asking about the probability of survival (conditional on other factors), but the probability of the cold war being safe (conditional on survival). And that’s something very different:
P(cold war safe | survival) = P(cold war safe)*P(survival | cold war safe)/P(survival).
Now, P(survival | cold war safe) is greater than P(survival) by definition—that’s what “safe” means—hence P(cold war safe | survival) is greater than P(cold war safe). Thus survival is positive evidence for the cold war being safe.
Note that this doesn’t mean that the cold war was actually safe—it just means that the likelihood of it being safe is increased when we notice we survived.
No. Here is the correct formalization:
S = we observe that we have survived
Ws = Cold War safe
Wd = Cold War dangerous
We want P(Ws|S)—probability that the Cold War is safe, given that we observe that we have survived:
P(Ws|S) = P(Ws) × P(S|Ws) / P(S)
Note that P(S) = 1, because the other possibility—P(we observe that we haven’t survived)—is impossible. (Is it actually 1 minus epsilon? Let’s say it is; that doesn’t materially change the reasoning.)
This means that this—
… P(survival | cold war safe) [i.e., P(S|Ws). —SA] is greater than P(survival) [P(S)] by definition …
… is false. P(S|Ws) is 1, and P(S) is 1. (Cogito ergo sum and so on.)
So observing that we have survived is not positive evidence for the Cold War being safe.
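For concreteness, here is a minimal numeric sketch of the formalization above; the 0.3 prior is an arbitrary illustrative figure, not a claim about the actual Cold War:

```python
# Minimal sketch: if P(S) = 1 and P(S|Ws) = 1, Bayes' rule leaves the prior unchanged.
p_safe = 0.3          # illustrative prior P(Ws)
p_S_given_safe = 1.0  # we always observe our own survival, per the argument above
p_S = 1.0             # the observation "we have survived" is claimed to be certain

posterior_safe = p_safe * p_S_given_safe / p_S
print(posterior_safe)  # 0.3 -- identical to the prior, so no update
```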
The negation of S is “we don’t observe we have survived”, which is perfectly possible.
Otherwise, your argument proves too much, and undoes all of probability theory. Suppose for the moment that a nuclear war wouldn’t have actually killed us, but just mutated us into mutants. Then let S’ be “us non-mutants observe that there was no nuclear war”. By your argument above, P(S’)=1, because us non-mutants cannot observe a nuclear war - only the mutant us can do so.
But the problem is now entirely non-anthropic. It seems to me that you have to either a) give up on probability altogether, or b) accept that the negation of S’ includes “us mutants observe a nuclear war”. Therefore the negation of “X observes Y” can include options where X doesn’t exist.
The negation of S is “we don’t observe we have survived”, which is perfectly possible.
What do you mean by this? If you are referring to the fact that we can ask “have we survived the Cold War” and answer “not yet” (because the Cold War isn’t over yet), then I don’t see how this salvages your account. The question you asked to begin with is one which it is only possible to ask once the Cold War is over, so “not yet” is inapplicable, and “no” remains impossible.
If you mean something else, then please clarify.
As for the rest of your comment… it seems to me that if you accept that “us, after we’ve suffered some mutations” is somehow no longer the same observers as “us, now”, then you could also say that “us, a second from now” is also no longer the same observers as “us, now”, at which point you’re making some very strong (and very strange) claim about personal identity, continuity of consciousness, etc. Any such view does far more to undermine the very notion of subjective probability than does my account, which only points out that dead people can’t observe things.
I’m pointing out that the negation of S = “X observes A at time T” does not imply that X exists. S’ = “X observes ~A at time T” is a subset of ~S, but not the whole thing (X not existing at all at time T is also a negation, for example). Therefore, the mere fact that S’ is impossible does not mean that S is certain.
The point about introducing differences in observers is that this is the kind of thing your theory has to track: checking when an observer is sufficiently divergent that they can be considered different/the same. Since I take a more “god’s eye view” of these problems (extinctions can happen, even without observers to observe them), it doesn’t matter to me whether various observers are “the same” or not.
The key question here is what exactly is P(S). Let’s simplify and assume that there is a guy called Bob who was born before the Cold War and will survive to the end unless there is a nuclear holocaust. The question is: is P(S)
a) the probability that a pre-war Bob will survive until after the war and observe that he didn’t die
b) the probability that a post-war Bob will observe that they survived
Stuart Armstrong says a), you say b). Both of you are mathematically correct, so the question is which calculation is based upon the correct assumptions. If we use a), then we conclude that a pre-war Bob should update on observing that they didn’t die. Before we can analyse b), we have to point out that there is an implicit assumption in probability problems that the agent knows the problem. So Bob is implicitly assumed to know that he is a post-war Bob before he observes anything, then he is told to update on the observation that he survived. Obviously, updating on information that you already know doesn’t cause you to update at all.
This leads to two possibilities. Either we have a method of solving the problem directly for post-war Bobs, in which case we solve this without ever updating on this information. Perhaps, for example, we imagine running this experiment multiple times, count up the number of Bobs who survive in the safe world, and divide it by the number of Bobs in total (I’m not stating whether or not this approach is correct, just providing an example of a solution that avoids updating). Or we imagine a pre-war Bob who doesn’t have this information, then updates on receiving it. The one thing we don’t do is assume a Bob who already knows this information and then get him to update on it.
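For what it’s worth, here is a rough sketch of the counting approach gestured at above. The prior and survival probabilities are purely illustrative assumptions, and both possible denominators are printed, since which one is appropriate is precisely what’s in dispute:

```python
import random

N = 100_000
p_safe = 0.5             # illustrative prior that the world is "safe"
p_survive_safe = 0.99    # chance Bob survives the cold war in a safe world
p_survive_danger = 0.30  # chance Bob survives in a dangerous world

safe_survivors = survivors = 0
for _ in range(N):
    safe = random.random() < p_safe
    if random.random() < (p_survive_safe if safe else p_survive_danger):
        survivors += 1
        safe_survivors += safe

print(safe_survivors / N)          # safe-world survivors over all Bobs (~0.50)
print(safe_survivors / survivors)  # safe-world survivors over surviving Bobs (~0.77)
```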
Before we can analyse b), we have to point out that there is an implicit assumption in probability problems that the agent knows the problem.
…
… we imagine a pre-war Bob who doesn’t have this information, then updates on receiving it.
Consider this formulation:
Bob has himself cryonically frozen in 1950. He is then thawed in 2018, but not told what year it is (yet). We ask Bob what he thinks is the probability of surviving the Cold War; he gives some number. We then let Bob read all about the Soviet Union’s collapse on Wikipedia; thus he now gains the information that he has survived the Cold War. Should he update on this?
Let’s suppose that there are two Bobs: one on a planet that ends up in a nuclear holocaust and one on a planet that doesn’t (effectively, they flipped a coin). Bob’s subjective probability that the cold war did not end in a nuclear holocaust, before he is told any information (including the year), is now effectively the Sleeping Beauty problem! So you’ve reduced an easier problem to a harder one, as we can’t calculate the impact of updating until we figure out the prior probability (we will actually need prior probabilities given different odds of surviving).
The way I was formulating this is as follows. It is 1950. Bob knows that the Cold War is going to happen and that there is a good chance of it ending in destruction. After the Cold War ends, we tell Bob (if he survives) or his ghost (if he does not) whether he survived or he is a ghost. If we tell Bob that he is not a ghost, then he’ll update on this information; note that this isn’t actually contingent on us talking to his ghost at all. So the fact that we can’t talk to his ghost doesn’t matter.
By the way, I don’t think I saw an explicit answer to my question about Bob who is cryonically frozen in 1950. Should he update his probability estimate of Cold War safety upon learning of recent history, or not?
Why not have him update? If it’s new info, his probability will change; if it’s old info, it will remain the same. The only way it’s never new info, so that the probability never changes, is if you’ve implicitly assumed he already knows it.
Can you write this formulation without invoking ghosts, spirits, mediums, or any other way for dead people to be able to think / observe / update (which they cannot, in fact, do—this being the very core of the whole “being dead” thing)? If you cannot, then this fact makes me very suspicious, even if I can’t pinpoint exactly where the error (if any) is.
Having said that, let me take a crack at pinpointing the problem. It seems to me that one way of formulating probabilities of future events (which I recall Eliezer using many a time in the Sequences) is “how much do you anticipate observation X?”.
But the crux of the matter is that Bob does not anticipate ever observing that he has not survived the Cold War! And no matter what happens, Bob will never find that his failure-to-anticipate this outcome has turned out to be wrong. (Edit: In other words, Bob will never be surprised by his observations.)
Another way of talking about probabilities is to talk about bets. So let’s say that Bob offers to make a deal with you: “If I observe that I’ve survived the Cold War, you pay me a dollar. But if I ever observe that I haven’t survived the Cold War, then I will pay you a million trillion kajillion dollars.”[1] Will you take this bet, if you estimate the probability of Bob surviving the Cold War to be, perhaps, 99.9%? Follow-up question #1: what does a willingness to make this bet imply about Bob’s probability estimate of his survival? Follow-up question #2: (a) what odds should Bob give you, on this bet; (b) what odds would leave Bob with a negative expected profit from the bet?
[1] After the disastrous outcome of the Cold War has devalued the dollar, this figure is worth only about $1,000 in today’s money. Still, that’s more than $1!
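One way to read the follow-up questions is via a small expected-value sketch; the 99.9% figure comes from the comment above, and the rest is illustrative:

```python
p_survive = 0.999        # your estimate that Bob survives the Cold War
bob_wins = 1.0           # you pay Bob $1 if he observes that he survived
bob_pays = 1e21          # Bob's promised payment if he ever observes otherwise

# Naive expected profit for you, treating both payouts as collectible:
print((1 - p_survive) * bob_pays - p_survive * bob_wins)  # astronomically positive

# Expected profit you can actually collect -- a dead Bob observes nothing and pays nothing:
print(-p_survive * bob_wins)                              # about -$1
```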
“Can you write this formulation without invoking ghosts, spirits, mediums, or any other way for dead people to be able to think / observe / update?”—As I’ve already argued, this doesn’t matter, because we don’t even have to be able to talk to them! But, in any case, I already provided a version of this problem where it’s a game show and the contestants are eliminated instead of killed.
Anyway, the possibilities are actually:
a) Bob observes that he survives the cold war
b) Bob observes that he didn’t survive the cold war
c) Bob doesn’t observe anything
You’re correct that b) is impossible, but c) isn’t, at least from the perspective of a pre-war Bob. Only a) is possible from the perspective of a post-war Bob, but only if he already knows that he is a post-war Bob. If he doesn’t know he is a post-war Bob, then it is new information and we should expect him to update on it.
“Another way of talking about probabilities is to talk about bets”—You can handle these bets in the decision theory rather than probability layer. See the heading A red herring: betting arguments in this post.
Update: The following may help. Bob is a man. Someone who never lies or is mistaken tells Bob that he is a man. Did Bob learn anything? No, if he already knew his gender; yes, if he didn’t. Similarly, for the cold war example, Bob always knows that he is alive, but it doesn’t automatically follow that he knows he survived the cold war or that such a war happened.
“Another way of talking about probabilities is to talk about bets”—You can handle these bets in the decision theory rather than probability layer. See the heading A red herring: betting arguments in this post.
I find this view unsatisfying, in the sense that if we accept “well, maybe it’s just some problem with our decision theory—nothing to do with probability…” as a response in a case like this, then it seems to me that we have to abandon the whole notion that probability estimates imply anything about willingness to bet in some way (or at all).
Now, I happen to hold this view myself (for somewhat other reasons), but I’ve seen nothing but strong pushback against it on Less Wrong and in other rationalist spaces. Am I to understand this as a reversal? That is, suppose I claim that the probability of some event X is P(X); I’m then asked whether I’d be willing to make some bet (my willingness for which, it is alleged, is implied by my claimed probability estimate); and I say: “No, no. I didn’t say anything at all about what my decision theory is like, so you can’t assume a single solitary thing about what bets I am or am not willing to make; and, in any case, probability theory is prior to decision theory, so my probability estimate stands on its own, without needing any sort of validation from my betting behavior!”—is this fine? Is it now the consensus view, that such a response is entirely valid and unimpeachable?
I personally think decision theory is more important than probability theory. And anthropics does introduce some subtleties into the betting setup—you can’t bet or receive rewards if you’re dead.
But there are ways around it. For instance, if the cold war is still on, we can ask how large X has to be before you would prefer X units of consumption after the war, if you survive, to 1 unit of consumption now.
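As a toy illustration of how such a question could back out a probability (assuming, purely for the sketch, linear utility and no time discounting, neither of which the comment specifies): if you are indifferent between 1 unit now and X units after the war conditional on surviving, then p·X = 1, so the implied survival probability is p = 1/X.

```python
# Toy sketch only: assumes linear utility and no time discounting.
X = 4.0                   # hypothetical indifference point, in units of consumption
implied_p_survive = 1 / X
print(implied_p_survive)  # 0.25
```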
Obviously the you that survived the cold war and knows they survived, cannot be given a decent bet on the survival. But we can give you a bet on, for instance “new evidence has just come to light showing that the cuban missile crisis was far more dangerous/far safer than we thought. Before we tell you the evidence, care to bet in which direction the evidence will point?”
Then since we can actually express these conditional probabilities in bets, the usual Dutch Book arguments show that they must update in the standard way.
Well, creating a decision theory that takes into account the possibility of dying is trivial. If the fraction of wins in which you survive is a, the fraction of losses in which you survive is b, and your initial probability of winning is p (with q = 1 − p), then we get:
Adjusted probability = ap/(ap+bq)
This is 1 when b=0.
This works for any event, not just wins or losses. We can easily derive the betting scheme from the adjusted probability. Is having to calculate the betting scheme from an adjusted probability really a great loss?
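A minimal sketch of that adjustment, directly implementing the formula above with q = 1 − p:

```python
def adjusted_probability(p, a, b):
    """p: initial probability of winning; a: fraction of wins you survive;
    b: fraction of losses you survive."""
    q = 1 - p
    return (a * p) / (a * p + b * q)

print(adjusted_probability(p=0.5, a=1.0, b=0.0))    # 1.0 -- you never survive a loss
print(adjusted_probability(p=0.5, a=0.75, b=0.25))  # 0.75
```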
If we’re going to bet, we have to bet on the right thing. The question isn’t about whether I survived the Cold War (which we already know), it’s about whether the Cold War was dangerous. So, what would I actually bet on? I don’t know how to quantify the dangerousness of the Cold War in real life, but here’s a simpler scenario: if Omega flipped a coin before the Cold War and, if it came up heads, he precisely adjusted the circumstances to make it so the Cold War had a 25% chance of killing me, and if tails, a 75% chance, then asked me to bet on heads or tails now in 2018, I would be willing to bet on heads at up to 3:1 odds.
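Checking the 3:1 figure with Bayes’ rule, using exactly the inputs stated in the comment:

```python
p_heads = 0.5
p_survive_heads = 0.75   # heads: the war had a 25% chance of killing me
p_survive_tails = 0.25   # tails: a 75% chance

p_heads_given_survival = (p_heads * p_survive_heads) / (
    p_heads * p_survive_heads + (1 - p_heads) * p_survive_tails
)
print(p_heads_given_survival)  # 0.75, i.e. break-even at 3:1 odds on heads
```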