I strongly support SIA over SSA. I haven’t read this sequence yet. But it looks like the sequence is about why the consequences of SIA are superior to those of SSA. This is a fine project. But a reason for preferring SIA over SSA that is just as strong as its more acceptable consequences is, I think, its greater theoretical coherence.
SIA says: given your prior, multiply the probability of each possible universe by the number/volume of observers indistinguishable from you in that universe, then normalize. This is intuitive, it has a nice meaning,* and it doesn’t have a discontinuity at zero observers.
*Namely: I’m a random member of the prior-probability-weighted set of possible observers indistinguishable from me.
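For concreteness, here is a minimal sketch of that update in Python. The worlds, prior, and observer counts are invented for illustration; the point is that a count of zero goes through the same multiply-and-normalize arithmetic as any other count, with no special case.

```python
# SIA update sketch: posterior(w) is proportional to prior(w) * n(w),
# where n(w) is the number (or measure) of observers indistinguishable
# from me in world w. Worlds, priors, and counts are illustrative only.

prior = {"big_world": 0.2, "small_world": 0.3, "empty_world": 0.5}
observers = {"big_world": 1_000_000, "small_world": 1, "empty_world": 0}

unnormalized = {w: prior[w] * observers[w] for w in prior}
total = sum(unnormalized.values())
posterior = {w: x / total for w, x in unnormalized.items()}

# empty_world ends up with probability 0 automatically: multiplying by
# zero observers needs no special-case handling.
print(posterior)
```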
For SSA, on the other hand, it’s hard even to explicate the anthropic update. But I think any formalization will have to treat the zero-indistinguishable-observers case, where the probability must update to zero, as a special case.
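To illustrate, here is a sketch of one candidate formalization (my choice for illustration, not a canonical statement of SSA): take the reference class to be exactly the observers indistinguishable from me and set posterior(w) ∝ prior(w) · n_indist(w) / n_ref(w). With that reference class the ratio is 1 whenever a world contains at least one such observer and 0/0 when it contains none, so the update to zero probability has to be imposed by hand.

```python
# One SSA-style update sketch, assuming the reference class is the set
# of observers indistinguishable from me (an illustrative choice, not a
# canonical one). Same illustrative worlds and counts as above.

prior = {"big_world": 0.2, "small_world": 0.3, "empty_world": 0.5}
observers = {"big_world": 1_000_000, "small_world": 1, "empty_world": 0}

def ssa_weight(n_indist, n_ref):
    # posterior(w) is proportional to prior(w) * n_indist / n_ref;
    # with this reference class, n_ref == n_indist.
    if n_ref == 0:
        # 0/0 is undefined, so ruling out worlds containing no such
        # observers has to be bolted on as a separate special case.
        return 0.0
    return n_indist / n_ref

unnormalized = {w: prior[w] * ssa_weight(observers[w], observers[w])
                for w in prior}
total = sum(unnormalized.values())
posterior = {w: x / total for w, x in unnormalized.items()}
print(posterior)
```

A broader reference class changes the weights for inhabited worlds, but worlds containing no reference-class observers at all still hit the same 0/0 and need the same by-hand fix.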