Using examples is neat. I’d characterize the problem as follows (though the numbers are not actually representative of my beliefs; I think it’s way less likely that everybody dies). Prior:
50%: Humans are relatively more competent (hypothesis C). The probability that everyone dies is 10%, the probability that only 5% survive is 20%, and the probability that everyone survives is 70%.
50%: Humans are relatively less competent (hypothesis not-C). The probability that everyone survives is 10%, the probability that only 5% survive is 20%, and the probability that everyone dies is 70%.
Assume we are in a finite multiverse (which is probably false) and take our reference class to only include people alive in the current year (whether the nuclear war happened or not). (SIA doesn’t care about reference classes, but SSA does.) Then:
SSA thinks
Notice we’re in a world where everyone survived (as opposed to only 5%) ->
if C is true, the probability of this is 0.7/(0.7+0.2*0.05)=70/71
if C isn’t true, the probability of this is 0.1/(0.1+0.2*0.05)=10/11
Thus, the odds ratio is 70/71:10/11.
Our prior being 1:1, the resulting probability is ~52% that C is true.
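Here is a minimal Python sketch of that SSA update (my own illustration; the dictionaries, names, and structure are mine, not part of the setup above):

```python
# Probabilities of each outcome under each hypothesis, plus the fraction
# of the population alive in each kind of world.
outcomes_C    = {"all_die": 0.1, "few_survive": 0.2, "all_survive": 0.7}
outcomes_notC = {"all_die": 0.7, "few_survive": 0.2, "all_survive": 0.1}
survivors = {"all_die": 0.0, "few_survive": 0.05, "all_survive": 1.0}

def ssa_likelihood(outcomes):
    # P(a random member of the reference class is in an all-survive world)
    total_alive = sum(p * survivors[w] for w, p in outcomes.items())
    return outcomes["all_survive"] / total_alive

lik_C    = ssa_likelihood(outcomes_C)     # 0.7/0.71 = 70/71
lik_notC = ssa_likelihood(outcomes_notC)  # 0.1/0.11 = 10/11
print(lik_C / (lik_C + lik_notC))         # ~0.52, with the 1:1 prior
```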
SIA thinks
Notice we’re alive ->
the world where C is true contains (0.7+0.2*0.05)/(0.1+0.2*0.05)=0.71/0.11 times as many people, so the update is 71:11 in favor of C.
Notice we’re in a world where everyone survived (as opposed to only 5%).
The odds ratio is 70/71:10/11, as earlier.
So the posterior odds ratio is (71:11) x (70/71:10/11)=70:10, corresponding to a probability of 87.5% that C is true.
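And a self-contained sketch of the SIA version, in the same style (again, the names are mine):

```python
outcomes_C    = {"all_die": 0.1, "few_survive": 0.2, "all_survive": 0.7}
outcomes_notC = {"all_die": 0.7, "few_survive": 0.2, "all_survive": 0.1}
survivors = {"all_die": 0.0, "few_survive": 0.05, "all_survive": 1.0}

def alive(outcomes):
    # Expected fraction of people alive: SIA's population weight.
    return sum(p * survivors[w] for w, p in outcomes.items())

# Update 1, "we're alive": favors the hypothesis with more observers.
odds_alive = alive(outcomes_C) / alive(outcomes_notC)            # 0.71/0.11

# Update 2, "everyone survived": the 70/71 : 10/11 update from SSA.
odds_survived = (0.7 / alive(outcomes_C)) / (0.1 / alive(outcomes_notC))

posterior_odds = odds_alive * odds_survived                      # 70:10 = 7
print(posterior_odds / (posterior_odds + 1))                     # 0.875
```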
Note that we could have done this faster by not separating it into two separate updates. The worlds where C is true contain 70/10 times as many people in our exact epistemic situation (alive in a world where everyone survived) as the worlds where C is false, which is exactly the posterior odds. This is what I meant when I said that the updates balance out, and this is why SIA doesn’t care about reference classes.
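In sketch form, the one-step version is just (0.7 and 0.1 being the all-survive probabilities from the prior above):

```python
# One-step SIA: weight each hypothesis directly by the measure of
# people in our exact situation (alive in an all-survive world).
posterior_odds = 0.7 / 0.1                     # 70:10, as in the two-step result
print(posterior_odds / (posterior_odds + 1))   # 0.875
```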
Note that we only care about the number of people surviving the nuclear war because we’ve included them in SSA’s reference class. But I don’t know why people would want to include exactly those people in the reference class and nobody else. If we include every human who has ever been alive, we have a large number of people in the reference class regardless of whether C is true or not, which makes SSA give relatively similar predictions to SIA. If we include a huge number of non-humans whose existence isn’t affected by whether C is true or not, SSA is practically identical to SIA (as the sketch below illustrates). This arbitrariness of the reference class is another reason to be sceptical about any argument that uses SSA (and to be sceptical of SSA itself).
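To illustrate that last point concretely (my own construction, not anything from the argument above): add n outside observers, unaffected by C, to the reference class, and SSA’s update approaches SIA’s 70:10.

```python
# Add n outside observers (unaffected by C) to SSA's reference class;
# SSA's odds update then approaches SIA's 70:10 = 7 as n grows.
def ssa_odds(n):
    lik_C    = 0.7 / (n + 0.71)   # 0.71 = fraction of people alive given C
    lik_notC = 0.1 / (n + 0.11)   # 0.11 = fraction alive given not-C
    return lik_C / lik_notC

for n in [0, 10, 1_000_000]:
    print(n, ssa_odds(n))         # ~1.08, ~6.6, ~7.0
```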
Really appreciate you taking the time to go through this!
To establish some language for what I want to talk about: I’d say your setup has two world sets (each with a prior of 50%) and six worlds (three in each world set). A possible error I was making was thinking in terms of just one world set (one hypothesis: C), and not thinking about the competing hypothesis.
I think in your SSA calculation, you treat all observers in the conditioned-on world set as “actually existing”. But shouldn’t you treat only the observers in a single world as “actually existing”? That is, you notice you’re in a world where everyone survives. If C is true, the probability of this, given that you survived, is (0.7/0.9)/(0.7/0.9 + 0.2/0.9) = 0.7/0.9 = 7/9.
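Concretely, the computation I have in mind would look something like this (a sketch; whether this is a legitimate way to set up the anthropic update is exactly what I’m unsure about):

```python
# Condition on having survived, then renormalize over the worlds that
# contain any survivors at all, ignoring how many observers each of
# those worlds contains.
outcomes_C = {"all_die": 0.1, "few_survive": 0.2, "all_survive": 0.7}

with_survivors = {w: p for w, p in outcomes_C.items() if w != "all_die"}
total = sum(with_survivors.values())            # 0.9
print(with_survivors["all_survive"] / total)    # 7/9 ~ 0.778
```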
And then what I wanted to do with SIA was to use a similar structure to the not-C branch of your SSA argument to say “Look, we have a 10/11 chance of being in an everyone-survived world even given not-C. So finding ourselves in an everyone-survived world isn’t strong evidence for C.”
It’s not yet clear to me (possibly because I am confused) that I definitely shouldn’t do this kind of reasoning. It’s tempting to say something like “I think the multiverse might be such that measure is assigned in one of these two ways to these three worlds. I don’t know which, but there’s not an anthropic effect about which way they’re assigned, while there is an anthropic effect within any particular assignment”. Perhaps this is more like ASSA than SIA?