Seems like it’s “much weaker” evidence if you buy something like SIA, and only a little weaker evidence if you buy something like SSA.
To expand: imagine a probability distribution over the amount of person-killing power that gets released as a consequence of nukes. Imagine it’s got a single bump well past the boundary where total extinction is expected. That means worlds where more people die are more likely[1].
If you sample, according to its probability mass, some world where someone survived, then our current world is quite surprising.
If instead you upweight the masses by how many people are in each, then you aren’t that surprised to be in our world.
[1]: Well, there might be a wrinkle here with the boundary at 0 and a bunch of probability mass getting “piled up” there.
Yes, that’s right.
My model is much more similar to ASSA than SIA, but it gives the SIA answer in this case.
Disagree. SIA always updates towards hypotheses that allow more people to exist (the Self-Indication Assumption is that your own existence as an observer indicates that there are more observers), which makes for an update towards nuclear war being rare, since there will exist more people in the multiverse if nuclear accidents are rare. This exactly balances out the observer selection effect – so SIA corresponds to the naive update rule which says that world-destroying activities must be rare, since we haven’t seen them. The argument about observer selection effects only comes from SSA-ish theories.
Note that, in anthropic dilemmas, total consequentialist ethics + UDT makes the same decisions as SIA + CDT, as explained by Stuart Armstrong here. This makes me think that total consequentialists shouldn’t care about observer selection effects.
This is complicated by the fact that infinities break both anthropic theories and ethical theories. UDASSA might solve this. In practice, I think UDASSA behaves a bit like a combination of SSA and SIA, though a bit closer to SIA – but I haven’t thought a lot about this.
I think you misread which direction the ‘“much weaker” evidence’ is supposed to be going, and that we agree (unless the key claim is about SIA exactly balancing selection effects).
There’s probably some misunderstanding, but I’m not immediately spotting it when rereading. You wrote:
Seems like it’s “much weaker” evidence [[for X]] if you buy something like SIA, and only a little weaker evidence if you buy something like SSA.
Going by the parent comment, I’m interpreting this as
it = “we didn’t observe nukes going off”
X = “humans are competent at handling dangerous technology”
I think that
SIA thinks that “we didn’t observe nukes going off” is relatively stronger evidence for “humans are competent at handling dangerous technology” (because SIA ignores observer selection effects, and updates naively).
SSA thinks that “we didn’t observe nukes going off” is relatively weaker evidence for “humans are competent at handling dangerous technology” (because SSA doesn’t update against hypotheses which would kill everyone).
Which seems to contradict what you wrote?
Yep, sorry, looks like we do disagree.
Not sure I’m parsing your earlier comment correctly, but I think you say “SIA says there should be more people everywhere, because then I’m more likely to exist. More people everywhere means I think my existence is evidence for people handling nukes correctly everywhere”. I’m less sure what you say about SSA, either “SSA still considers the possibility that nukes are regularly mishandled in a way that kills everyone” or “SSA says you should also consider yourself selected from the worlds with no observers”.
Do I have you right?
I say, “SIA says that if your prior is ’10% everyone survives, 20% only 5% survive, 70% everyone dies’, and you notice you’re in a ‘survived’ world, you should think you are in the ‘everyone survives’ world with 90% probability (as that’s where 90% of the probability-weighted survivors are)”.
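(A quick numerical check of the quoted rule, as a sketch using the stated 10%/20%/70% prior; the variable names are mine.)

```python
# Probability-weighted number of survivors in each kind of world (per unit population).
p_everyone_survives = 0.10           # prior weight of the "everyone survives" world
p_only_5pct_survive = 0.20 * 0.05    # prior weight 0.20, but only 5% of people are alive there
# Share of the probability-weighted survivors who live in the "everyone survives" world:
share = p_everyone_survives / (p_everyone_survives + p_only_5pct_survive)
print(share)  # ~0.909, i.e. the roughly-90% figure quoted above
```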
Using examples is neat. I’d characterize the problem as follows (though the numbers are not actually representative of my beliefs – I think it’s way less likely that everybody dies). Prior:
50%: Humans are relatively more competent (hypothesis C). The probability that everyone dies is 10%, the probability that only 5% survive is 20%, the probability that everyone survives is 70%.
50%: Humans are relatively less competent. The probability that everyone survives is 10%, the probability that only 5% survive is 20%, the probability that everyone dies is 70%.
Assume we are in a finite multiverse (which is probably false) and take our reference class to only include people alive in the current year (whether the nuclear war happened or not). (SIA doesn’t care about reference classes, but SSA does.) Then:
SSA thinks
Notice we’re in a world where everyone survived (as opposed to only 5%) ->
if C is true, the probability of this is 0.7/(0.7+0.2*0.05)=70/71
if C isn’t true, the probability of this is 0.1/(0.1+0.2*0.05)=10/11
Thus, the odds ratio is 70/71:10/11.
Our prior being 1:1, the resulting probability is ~52% that C is true.
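(A quick numerical check of the SSA figures above – a sketch; the variable names are mine.)

```python
# SSA with the "people alive in the current year" reference class, as set up above.
p_obs_given_C    = 0.7 / (0.7 + 0.2 * 0.05)   # = 70/71: P(everyone-survived world | C)
p_obs_given_notC = 0.1 / (0.1 + 0.2 * 0.05)   # = 10/11: the same quantity if C is false
posterior_odds = p_obs_given_C / p_obs_given_notC  # prior odds are 1:1
print(posterior_odds / (1 + posterior_odds))       # ~0.52, matching the ~52% above
```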
SIA thinks
Notice we’re alive ->
the world where C is true contains (in expectation) (0.7+0.2*0.05)/(0.1+0.2*0.05)=0.71/0.11 times as many people as the world where C is false, so the update is 71:11 in favor of C.
Notice we’re in a world where everyone survived (as opposed to only 5%).
The odds ratio is 70/71:10/11, as earlier.
So the posterior odds ratio is (71:11) x (70/71:10/11)=70:10, corresponding to a probability of 87.5% that C is true.
Note that we could have done this faster by not separating it into two separate updates. In expectation, the world where C is true contains 70/10 times as many people in our exact situation (alive in a world where everyone survived) as the world where C is false, which is exactly the posterior odds. This is what I meant when I said that the updates balance out, and this is why SIA doesn’t care about the reference classes.
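(And the corresponding check for the SIA figures, including the one-step shortcut – again a sketch with my own variable names.)

```python
# SIA: the two-step version and the one-step shortcut described above.
update_alive    = (0.7 + 0.2 * 0.05) / (0.1 + 0.2 * 0.05)  # 0.71/0.11: expected-population ratio
update_survived = (0.7 / 0.71) / (0.1 / 0.11)              # 70/71 : 10/11, as in the SSA step
posterior_odds = update_alive * update_survived            # = 70/10
print(posterior_odds / (1 + posterior_odds))               # 0.875
# Shortcut: compare expected numbers of people in our exact situation (everyone survived).
shortcut_odds = 0.7 / 0.1
print(shortcut_odds / (1 + shortcut_odds))                 # 0.875 again
```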
Note that we only care about the number of people surviving after a nuclear accident because we’ve included them in SSA’s reference class. But I don’t know why people want to include exactly those people in the reference class, and nobody else. If we include every human who has ever been alive, the reference class is large regardless of whether C is true or not, which makes SSA give predictions relatively similar to SIA’s. If we include a huge number of non-humans whose existence isn’t affected by whether C is true or not, SSA is practically identical to SIA. This arbitrariness of the reference class is another reason to be sceptical of any argument that uses SSA (and to be sceptical of SSA itself).
Really appreciate you taking the time to go through this!
To establish some language for what I want to talk about, I want to say your setup has two world sets (each with a prior of 50%) and six worlds (3 in each world set). A possible error I was making was just thinking in terms of one world set (or, one hypothesis: C), and not thinking about the competing hypotheses.
I think in your SSA, you treat all observers in the conditioned-on world set as “actually existing”. But shouldn’t you treat only the observers in a single world as “actually existing”? That is, you notice you’re in a world where everyone survives. If C is true, the probability of this, given that you survived, is (0.7/0.9)/(0.7/0.9 + 0.2/0.9) = 7/9.
And then what I wanted to do with SIA is to use a similar structure to the not-C branch of your SSA argument to say “Look, we have a 10/11 chance of being in an ‘everyone survived’ world even given not-C. So it isn’t strong evidence for C to find ourselves in an ‘everyone survived’ world”.
It’s not yet clear to me (possibly because I am confused) that I definitely shouldn’t do this kind of reasoning. It’s tempting to say something like “I think the multiverse might be such that measure is assigned in one of these two ways to these three worlds. I don’t know which, but there’s not an anthropic effect about which way they’re assigned, while there is an anthropic effect within any particular assignment”. Perhaps this is more like ASSA than SIA?
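(For completeness, the two fractions in this last comment – 7/9 and 10/11 – check out numerically; this sketch just labels which reading each corresponds to.)

```python
# The two readings discussed in the last comment, with the same numbers as above.
# "Only one world actually exists" reading: given C and survival, chance of the
# everyone-survived world, weighting worlds by probability rather than by head-count.
print((0.7 / 0.9) / (0.7 / 0.9 + 0.2 / 0.9))  # = 7/9, ~0.778
# Head-count-weighted reading (the not-C branch of the SSA calculation above):
print(0.1 / (0.1 + 0.2 * 0.05))               # = 10/11, ~0.909
```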