And what if the universe is likely to be different for the two possible copies of you, as in the Boltzmann brain case? Presumably you have to take some weighted average of the “non-anthropic probabilities” produced by the two different universes.
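A rough sketch of what that weighted average would look like (purely illustrative notation: $U_1$ is the universe where the copy of me is an ordinary observer, $U_2$ the one where it is a Boltzmann brain, and $w_i$ stands in for whatever the non-anthropic probabilities turn out to be):

$$P(\text{I observe } X) \;=\; w_1\, P(\text{I observe } X \mid U_1) \;+\; w_2\, P(\text{I observe } X \mid U_2), \qquad w_i = P_{\text{non-anthropic}}(U_i).$$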
Re: note. This use of SSA and SIA can also be wrong. If there is a correct method for assigning subjective probabilities to what S.B. will see when she looks outside, it should not be an additional thing on top of predicting the world; it should be a natural part of the process by which S.B. predicts the world.
EDIT: Okay, I'm getting a better understanding of what you mean now. So you’d probably just say that the weight on the different universes should be exactly this non-anthropic probability, assigned by some universal prior or however one assigns probabilities to universes. My problem with this is that when assigning probabilities in a principled, subjective way (i.e. trying to figure out what your information about the world really implies, rather than starting by assuming some model of the world), there is not necessarily an easily identifiable thing that is the non-anthropic probability of a Boltzmann brain copy of me existing, and this needs to be cleared up in a way that isn’t just “assume a model of the world.” If anthropic reasoning is, as I said above, not some add-on to the process of assigning probabilities but a part of it, then it makes less sense to say something like “just assign probabilities, but don’t do that last anthropic step.”
But I suspect this problem actually can be resolved. Maybe by interpreting the non-anthropic number as something like the probability that the universe is a certain way (i.e. assuming some sort of physicalist prior), conditional on there being at least one copy of me, and then assuming that this resolves all anthropic problems?
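Concretely, the conditioning proposal would be something like this (a sketch only, assuming we can write down a physicalist prior $P$ over universes and let $E$ be the event that at least one copy of me exists):

$$P(U \mid E) \;=\; \frac{P(U)\, P(E \mid U)}{\sum_{U'} P(U')\, P(E \mid U')},$$

where the update is on $E$ alone rather than on any further indexical information about which copy I am.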