This result seems strange to me, even though the maths seems to check out. Is there a conceptual explanation of why this should be the case?
Maybe: larger reference classes give a bigger SIA boost to universes containing more members of the class, but they also make it less likely that you would be any particular member of that class, so when you update on who you are within the class, the two effects cancel exactly.
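Spelling out that cancellation (a sketch, with notation I'm introducing: $p_i$ is the prior on universe $U_i$, which contains $N_i$ members of the reference class $R$, $n_i$ of whom fall in a restriction $R' \subseteq R$; I also assume that, within a universe, you are equally likely to be any member of the class):

```latex
% SIA step: weight each universe by its count of reference-class members.
P(U_i \mid \text{I exist}) \;\propto\; p_i \, N_i
% Self-locating step: given U_i, each of the N_i members is equally
% likely to be you, so the chance you land in R' is n_i / N_i:
P(U_i \mid \text{I am in } R') \;\propto\; p_i \, N_i \cdot \frac{n_i}{N_i}
  \;=\; p_i \, n_i
% The N_i cancels, so the enclosing reference class R drops out entirely.
```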
More conceptually: under SIA, the choice of reference class commutes with restrictions of that reference class. So it doesn't matter whether you take the reference class of all humans, then specialise to those alive today, then specialise to you; or take the reference class of all humans alive today, then specialise to you; or just take the reference class of you alone. SIA is, in that sense, consistent with respect to updating.
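Here is a toy numerical check of that commuting property (a sketch with made-up counts: two hypothetical universes, with observer counts at each level of restriction chosen purely for illustration):

```python
# Toy check that SIA updating commutes with reference-class restriction.
priors = [0.5, 0.5]
all_humans = [100, 1000]   # reference class: all humans
alive_today = [10, 200]    # restriction: humans alive today
just_you = [1, 2]          # restriction: exact copies of you

def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Route A: SIA over all humans, then condition on being alive today,
# then on being you.  Each conditioning step multiplies by the fraction
# of the current class that survives the restriction.
route_a = normalise([
    p * n_all * (n_today / n_all) * (n_you / n_today)
    for p, n_all, n_today, n_you
    in zip(priors, all_humans, alive_today, just_you)
])

# Route B: start directly from the reference class "exact copies of you".
route_b = normalise([p * n_you for p, n_you in zip(priors, just_you)])

print(route_a)  # [0.333..., 0.666...]
print(route_b)  # identical: the intermediate class sizes cancel
assert all(abs(a - b) < 1e-12 for a, b in zip(route_a, route_b))
```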
Does that help?
Thanks, that’s helpful. Actually, now that you’ve put it that way, I recall having known this fact at some point in the past.
Another way of seeing SIA plus the update on yourself: weight each universe by the expected number of exact (subjective) copies of you it contains, then renormalise.
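In symbols (a sketch; $\mathbb{E}[K_i]$ is the expected number of exact subjective copies of you in universe $U_i$, and $p_i$ its prior):

```latex
P(U_i \mid \text{my exact subjective state})
  \;=\; \frac{p_i \, \mathbb{E}[K_i]}{\sum_j p_j \, \mathbb{E}[K_j]}
```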