The other camp says “No nuclear weapons have been used or detonated accidentally since 1945. This is the optimal outcome, so I guess this is evidence that humanity is good at handling dangerous technology.”
When I look at that fact and Wikipedia’s list of close calls, the most plausible explanation doesn’t seem to be “it was unlikely for nuclear weapons to be used” or “it was likely for nuclear weapons to be used, yet we got lucky” but rather “nuclear weapons were probably used in most branches of the multiverse, but those branches have significantly fewer observers, so we don’t observe those worlds because of survivorship bias.”
This requires that MW (many-worlds) is true, that this kind of anthropic reasoning is correct, and that the use of nuclear weapons does, indeed, decrease the number of observers significantly. I’m not sure about the third, but pretty sure about the first two. The conjunction of all three seems significantly more likely than either of the two alternatives.
I don’t have insights on the remaining part of your post, but I think you’re admitting to losing Bayes points that you should not, in fact, be losing. [Edit: meaning you should still lose some but not that many.]
I don’t really know how to think about anthropics, sadly.
But I think that it’s pretty likely that a nuclear war would not have killed everyone. So I still lose Bayes points compared to the world where nukes were fired but not everyone died.
Nuclear war doesn’t have to kill everyone to make our world non-viable for anthropic reasons. It just has to render our world unlikely to be simulated.
I feel like this is a very important point that I have never heard made before.
To be clear, after I made it, I thought more about it and I’m not sure it’s correct. I think I’d have to actually do the math; my intuitions aren’t coming in loud and clear here. The reason I’m unsure is that even if for some reason post-apocalyptic worlds rarely get simulated (and thus it’s very unsurprising that we find ourselves in a world that didn’t suffer an apocalypse, because we’re probably in a simulation), it may be that we ought to ignore this, since we are trying to act as if we are not simulated anyway, because that’s how we have the most influence or something.
if for some reason post-apocalyptic worlds rarely get simulated
To draw out the argument a little further, the reason that post-apocalyptic worlds don’t get simulated is that most (?) of the simulations of our era are a way to simulate superintelligences in other parts of the multiverse, to talk or trade with.
(As in the basic argument of this Jaan Tallinn talk)
If advanced civilization is wiped out by nuclear war, that simulation might be terminated, if it seems sufficiently unlikely to lead to a singularity.
Yep. What I was thinking was: Maybe most simulations of our era are made for the purpose of acausal trade or something like it. And maybe societies that are ravaged by nuclear war make for poor trading partners for some reason. (e.g. maybe they never rebuild, or maybe it takes so long to figure out whether or not they eventually rebuild that it’s not worth the cost of simulating them, or maybe they rebuild but in a way that makes them poor trading partners.) So then the situation would be: Even if most civilizations in our era nuke themselves in a way that doesn’t lead to extinction, the vast majority of people in our era would be in a civilization that didn’t, because they’d be in a simulation of one of the few civilizations that didn’t.
What I’m confused about right now is what the policy implications are of this. As I understand it, the dialectic is something like:
A: Nuclear war isn’t worth worrying about because we’ve survived it for 60 years so far, so it must be very unlikely.
B: But anthropics! Maybe actually the probability of nuclear war is fairly high. Because of anthropics we’d never know; dead people aren’t observers.
A: But nuclear war wouldn’t have killed everyone; if nuclear war is likely, shouldn’t we expect to find ourselves in some post-apocalyptic civilization?
Me: But simulations! If post-apocalyptic civilizations are unlikely to be simulated, then it could be that nuclear war is actually pretty likely after all, and we just don’t know because we’re in one of the simulations of the precious few civilizations that avoided nuclear war. Simulations that launch nukes get shut down.
Me 2: OK, but… maybe that means that nuclear war is unlikely after all? Or at least, should be treated as unlikely?
Me: Why?
Me 2: I’m not sure… something something we should ignore hypotheses in which we are simulated because most of our expected impact comes from hypotheses in which we aren’t?
Me: That doesn’t seem like it would justify ignoring nuclear war. Look, YOU are the one who has the burden of proof; you need to argue that nuclear war is unlikely on the grounds that it hasn’t happened so far, but I’ve presented a good rebuttal to that argument.
Me 2: OK let’s do some math. Two worlds. In World Safe, nuclear war is rare. In World Dangerous, nuclear war is common. In both worlds, most people in our era are simulations and moreover there are no simulations of post-apocalyptic eras. Instead of doing updates, let’s just ask what policy is the best way to hedge our bets between these two worlds… Well, what the simulations do doesn’t matter so much, so we should make a policy that mostly just optimizes for what the non-simulations do. And most of the non-simulations with evidence like ours are in World Safe. So the best policy is to treat nukes as dangerous.
OK, that felt good. I think I tentatively agree with Me 2.
[EDIT: Lol I mean “treat nukes as NOT dangerous/likely” what a typo!]
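Here’s a minimal numerical sketch of Me 2’s hedging move, using made-up numbers: a 50/50 prior over the two worlds, and an assumption that 90% of non-simulated civilizations at our stage in World Safe have avoided nuclear war so far, versus 10% in World Dangerous.

```python
# Sketch of the "hedge between World Safe and World Dangerous" argument.
# All numbers are illustrative assumptions, not anything from the dialogue.
prior = {"Safe": 0.5, "Dangerous": 0.5}
# Fraction of *non-simulated* civilizations at our stage that share our
# evidence (no nuclear war so far), under each world:
p_no_war_yet = {"Safe": 0.9, "Dangerous": 0.1}

# Simulated copies contribute little to long-run impact, so weight each world
# by how many non-simulated civilizations in it have evidence like ours.
weight = {w: prior[w] * p_no_war_yet[w] for w in prior}
total = sum(weight.values())
print({w: round(v / total, 2) for w, v in weight.items()})
# {'Safe': 0.9, 'Dangerous': 0.1} -- most of the decision-relevant weight sits
# in World Safe, matching the corrected conclusion above.
```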
Hm, interesting. This suggests that, if we’re in a simulation, nuclear war is relatively more likely. However, all such simulations are likely to be short-lived, so if we’re in a simulation, we shouldn’t care about preventing nuclear war for longtermist reasons (only for short-termist ones). And if we think we’re sufficiently likely to be outside a simulation to make long-term concerns dominate short-termist ones (obligatory reference), then we should just condition on not being in a simulation, and then I think this point doesn’t matter.
Yeah, the “we didn’t observe nukes going off” observation is definitely still some evidence for the “humans are competent at handling dangerous technology” hypothesis, but (if one buys into the argument I’m making) it’s much weaker evidence than one would naively think.
Seems like it’s “much weaker” evidence if you buy something like SIA, and only a little weaker evidence if you buy something like SSA.
To expand: imagine a probability distribution over the amount of person-killing power that gets released as a consequence of nukes. Imagine it’s got a single bump well past the boundary where total extinction is expected. That means worlds where more people die are more likely[1].
If you sample, according to its probability mass, some world where someone survived, then our current world is quite surprising.
If instead you upweight the masses by how many people are in each, then you aren’t that surprised to be in our world.
[1]: Well, there might be a wrinkle here with the boundary at 0 and a bunch of probability mass getting “piled up” there.
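A toy numerical version of this picture (the survival fractions and probability masses below are made up, with most of the mass sitting past the extinction boundary):

```python
# Keys: surviving fraction of humanity in each world; values: prior mass.
# Most of the mass is at 0 survivors, i.e. past the extinction boundary.
worlds = {1.0: 0.03, 0.7: 0.03, 0.3: 0.04, 0.05: 0.10, 0.0: 0.80}

surviving = {f: p for f, p in worlds.items() if f > 0}

# Sample a surviving world by prior mass alone: our full-survival world is
# quite surprising.
z = sum(surviving.values())
print({f: round(p / z, 2) for f, p in surviving.items()})
# {1.0: 0.15, 0.7: 0.15, 0.3: 0.2, 0.05: 0.5}

# Upweight each surviving world by how many people it contains: our world
# becomes the single most likely place to find ourselves.
zw = sum(p * f for f, p in surviving.items())
print({f: round(p * f / zw, 2) for f, p in surviving.items()})
# {1.0: 0.44, 0.7: 0.31, 0.3: 0.18, 0.05: 0.07}
```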
Yes, that’s right.
My model is much more similar to ASSA than SIA, but it gives the SIA answer in this case.
Disagree. SIA always updates towards hypotheses that allow more people to exist (the Self Indication Assumption is that your own existence as an observer indicates that there are more observers), which makes for an update that nuclear war is rare, since there will exist more people in the multiverse if nuclear accidents are rare. This exactly balances out the claim about selection effects – so SIA corresponds to the naive update rule which says that world-destroying activities must be rare, since we haven’t seen them. The argument about observer selection effects only comes from SSA-ish theories.
Note that, in anthropic dilemmas, total consequentialist ethics + UDT makes the same decisions as SIA + CDT, as explained by Stuart Armstrong here. This makes me think that total consequentialists shouldn’t care about observer selection effects.
This is complicated by the fact that infinities break both anthropic theories and ethical theories. UDASSA might solve this. In practice, I think UDASSA behaves a bit like a combination of SSA and SIA, though a bit closer to SIA, but I haven’t thought a lot about this.
I think you misread which direction the ‘“much weaker” evidence’ is supposed to be going, and that we agree (unless the key claim is about SIA exactly balancing selection effects)
There’s probably some misunderstanding, but I’m not immediately spotting it when rereading. You wrote:
Seems like it’s “much weaker” evidence [[for X]] if you buy something like SIA, and only a little weaker evidence if you buy something like SSA.
Going by the parent comment, I’m interpreting this as
it = “we didn’t observe nukes going off”
X = “humans are competent at handling dangerous technology”
I think that
SIA thinks that “we didn’t observe nukes going off” is relatively stronger evidence for “humans are competent at handling dangerous technology” (because SIA ignores observer selection effects, and updates naively).
SSA thinks that “we didn’t observe nukes going off” is relatively weaker evidence for “humans are competent at handling dangerous technology” (because SSA doesn’t update against hypotheses which would kill everyone).
Which seems to contradict what you wrote?
Yep, sorry, looks like we do disagree.
Not sure I’m parsing your earlier comment correctly, but I think you say “SIA says there should be more people everywhere, because then I’m more likely to exist. More people everywhere means I think my existence is evidence for people handling nukes correctly everywhere”. I’m less sure what you say about SSA, either “SSA still considers the possibility that nukes are regularly mishandled in a way that kills everyone” or “SSA says you should also consider yourself selected from the worlds with no observers”.
Do I have you right?
I say, “SIA says that if your prior is ’10% everyone survives, 20% only 5% survive, 70% everyone dies’, and you notice you’re in a ‘survived’ world, you should think you are in the ‘everyone survives’ world with 90% probability (as that’s where 90% of the probability-weighted survivors are)”.
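For what it’s worth, a quick check of that figure, normalizing the population of a full-survival world to 1:

```python
# Probability-weighted survivor counts under the quoted prior.
full    = 0.10 * 1.00  # 10% chance that everyone (fraction 1.0) survives
partial = 0.20 * 0.05  # 20% chance that only 5% survive
print(full / (full + partial))  # ~0.909, i.e. roughly the 90% quoted above
```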
Using examples is neat. I’d characterize the problem as follows (though the numbers are not actually representative of my beliefs; I think it’s way less likely that everybody dies). Prior:
50%: Humans are relatively more competent (hypothesis C). The probability that everyone dies is 10%, the probability that only 5% survive is 20%, the probability that everyone survives is 70%.
50%: Humans are relatively less competent. The probability that everyone survives is 10%, the probability that only 5% survive is 20%, the probability that everyone dies is 70%.
Assume we are in a finite multiverse (which is probably false) and take our reference class to only include people alive in the current year (whether the nuclear war happened or not). (SIA doesn’t care about reference classes, but SSA does.) Then:
SSA thinks
Notice we’re in a world where everyone survived (as opposed to only 5%) ->
if C is true, the probability of this is 0.7/(0.7+0.2*0.05)=70/71
if C isn’t true, the probability of this is 0.1/(0.1+0.2*0.05)=10/11
Thus, the odds ratio is 70/71:10/11.
Our prior being 1:1, the resulting probability is ~52% that C is true.
SIA thinks
Notice we’re alive ->
the world where C is true contains (0.7+0.2*0.05)/(0.1+0.2*0.05)=0.71/0.11 times as many people, so the update is 71:11 in favor of C.
Notice we’re in a world where everyone survived (as opposed to only 5%).
The odds ratio is 70/71:10/11, as earlier.
So the posterior odds ratio is (71:11) x (70/71:10/11)=70:10, corresponding to a probability of 87.5% that C is true.
Note that we could have done this faster by not separating it into two separate updates. The world where C is true contains 70⁄10 times as many people as the world where C is false, which is exactly the posterior odds. This is what I meant when I said that the updates balance out, and this is why SIA doesn’t care about the reference classes.
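A short sketch that reproduces these numbers (the helper names are mine; a full-survival world is normalized to 1 unit of people):

```python
# Reproduces the ~52% (SSA) and 87.5% (SIA) figures above.
C     = {1.0: 0.7, 0.05: 0.2, 0.0: 0.1}   # humans relatively more competent
not_C = {1.0: 0.1, 0.05: 0.2, 0.0: 0.7}   # humans relatively less competent

def frac_in_full_survival(h):
    # Fraction of present-day observers (the chosen reference class) who find
    # themselves in a full-survival world, given hypothesis h.
    observers = {f: p * f for f, p in h.items()}
    return observers[1.0] / sum(observers.values())

def expected_observers(h):
    # Expected number of present-day observers under hypothesis h.
    return sum(p * f for f, p in h.items())

# SSA: update only on *which* kind of world we find ourselves in (prior 1:1).
ssa_odds = frac_in_full_survival(C) / frac_in_full_survival(not_C)  # (70/71):(10/11)
print("SSA P(C) =", round(ssa_odds / (1 + ssa_odds), 3))            # 0.52

# SIA: also update on existing at all (71:11), then on which world we're in.
sia_odds = expected_observers(C) / expected_observers(not_C) * ssa_odds
print("SIA P(C) =", round(sia_odds / (1 + sia_odds), 3))            # 0.875
```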
Note that we only care about the number of people surviving after a nuclear accident because we’ve included them in SSA’s reference class. But I don’t know why people would want to include exactly those people in the reference class, and nobody else. If we include every human who has ever been alive, we have a large number of people alive regardless of whether C is true or not, which makes SSA give relatively similar predictions to SIA. If we include a huge number of non-humans whose existence isn’t affected by whether C is true or not, SSA is practically identical to SIA. This arbitrariness of the reference class is another reason to be sceptical about any argument that uses SSA (and to be sceptical of SSA itself).
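To make the reference-class point concrete, here’s a hypothetical variation: pad SSA’s reference class with observers who exist in every branch whether or not C is true, and its answer slides from ~52% toward SIA’s 87.5%.

```python
# SSA's verdict as the reference class is padded with observers unaffected by C.
C     = {1.0: 0.7, 0.05: 0.2, 0.0: 0.1}
not_C = {1.0: 0.1, 0.05: 0.2, 0.0: 0.7}

def ssa_posterior(extra_observers):
    def p_full_survival(h):
        humans = {f: p * f for f, p in h.items()}
        return humans[1.0] / (sum(humans.values()) + extra_observers)
    odds = p_full_survival(C) / p_full_survival(not_C)  # prior odds are 1:1
    return odds / (1 + odds)

for extra in [0, 1, 10, 1000]:
    print(extra, round(ssa_posterior(extra), 3))
# 0 -> 0.52, 1 -> 0.82, 10 -> 0.869, 1000 -> 0.875 (SIA's answer)
```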
Really appreciate you taking the time to go through this!
To establish some language for what I want to talk about, I want to say your setup has two world sets (each with a prior of 50%) and six worlds (3 in each world set). A possible error I was making was just thinking in terms of one world set (or, one hypothesis: C), and not thinking about the competing hypotheses.
I think in your SSA, you treat all observers in the conditioned-on world set as “actually existing”. But shouldn’t you treat only the observers in a single world as “actually existing”? That is, you notice you’re in a world where everyone survives. If C is true, the probability of this, given that you survived, is (0.7/0.9)/(0.7/0.9 + 0.2/0.9) = 7⁄9.
And then what I wanted to do with SIA is to use a similar structure to the not-C branch of your SSA argument to say “Look, we have a 10⁄11 chance of being in an everyone-survived world even given not-C. So it isn’t strong evidence for C to find ourselves in an everyone-survived world”.
It’s not yet clear to me (possibly because I am confused) that I definitely shouldn’t do this kind of reasoning. It’s tempting to say something like “I think the multiverse might be such that measure is assigned in one of these two ways to these three worlds. I don’t know which, but there’s not an anthropic effect about which way they’re assigned, while there is an anthropic effect within any particular assignment”. Perhaps this is more like ASSA than SIA?
Copied from a comment above:
There haven’t been anthropogenic risks that killed 10% of humans. The anthropic update on “10% of people killed” is pretty small. (World War 2 killed ~2% and feels like the strongest example against the “Them” position.)
You could believe that most risks are all-or-nothing, in which case I agree the “Them” position is considerably weaker due to anthropic effects.
This argument sounds like it’s SSA-ish (it certainly doesn’t work for SIA). I haven’t personally looked into this, but I think Anders Sandberg uses SSA for his analysis in this podcast, where he claims that taking observer selection effects into account changes the estimated risk of nuclear war by less than a factor of 2 (search for “not even twice”), because of some mathy details making use of near-miss statistics. So if one is willing to trust Anders to be right about this (I don’t think the argument is written up anywhere yet?), observer selection effects wouldn’t matter much regardless of your anthropics.