That’s not quite what I was talking about, but I managed to resolve my question to my own satisfaction anyhow. The problem of conditionalization can be worked around fairly easily.
Suppose that there is a 50% chance of there being a Boltzmann brain copy of you
Actually, the probability that you should assign to there being a copy of you is not defined under your system—otherwise you’d be able to conceive of a solution to the Sleeping Beauty problem—the entire schtick is that Sleeping Beauty is not merely ignorant about whether another copy of her exists, but that it is supposedly a bad question.
Hm, okay, I think this might cause trouble in a different way than I was originally thinking of. Because all sorts of things are possibilities, and it’s not obvious to me how ADT is able to treat reasonable anthropic possibilities differently from astronomically unlikely ones, if it throws out any measure of unlikeliness. You might try to resolve this by putting in some “outside perspective” probabilities, e.g. that an outside observer in our universe would see me as a normal observer most of the time and as a Boltzmann brain only rarely, but this requires making drastic assumptions about what the “outside observer” is actually outside of, observing. If I really were a Boltzmann brain in a thermal universe, an outside observer would think I was more likely to be a Boltzmann brain. So postulating an outside perspective is just an awkward way of sneaking in probabilities gained in a different way.
This seems to leave the option of really treating all apparent possibilities similarly. But then the benefit of good actions in the real world gets drowned out by all the noise from the unlikely possibilities—after all, for every action, one can construct a possibility in which it’s good and another in which it’s bad. If there’s no way to break ties between possibilities, no ties get broken.
Actually, the probability that you should assign to there being a copy of you is not defined under your system—otherwise you’d be able to conceive of a solution to the Sleeping Beauty problem
Non-anthropic (“outside observer”) probabilities are well defined in the sleeping beauty problem—the probability of heads/tails is exactly 1⁄2 (most of the time, you can think of these as the SSA probabilities over universes—the only difference being in universes where you don’t exist at all). You can use a universal prior or whatever you prefer; the “outside observer” doesn’t need to observe anything or be present in any way.
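To make that concrete, here is a minimal sketch (the dictionary names and the particular SSA/SIA calculations below are just the standard textbook versions, used for illustration, not anything specific to ADT): the non-anthropic prior over universes is the bare coin flip, and SSA/SIA are optional reweightings layered on top of it.

```python
# Sleeping Beauty: the non-anthropic ("outside observer") prior over
# universes is just the fair coin -- no reference to observers at all.
non_anthropic = {"heads": 0.5, "tails": 0.5}

# Number of awakenings (observer-moments) of Beauty in each universe.
awakenings = {"heads": 1, "tails": 2}

# SSA: keep the prior over universes, then sample an observer-moment
# *within* the actual universe, so the credence that the coin landed
# heads is just the prior on heads: 1/2.
ssa_heads = non_anthropic["heads"]

# SIA: weight each universe by its number of observer-moments and
# renormalise; the credence in heads drops to 1/3.
weights = {u: non_anthropic[u] * awakenings[u] for u in non_anthropic}
sia_heads = weights["heads"] / sum(weights.values())

print(ssa_heads)  # 0.5
print(sia_heads)  # 0.333...
```

The only point is that the 1/2s on the first line are well defined before any anthropic step is taken.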
I note that you need these initial probabilities in order for SSA or SIA to make any sense at all (pre-updating on your existence), so I have no qualms claiming them for ADT as well.
And what if the universe is probably different for the two possible copies of you, as in the case of the Boltzmann brain? Presumably you have to take some weighted average of the “non-anthropic probabilities” produced by the two different universes.
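For concreteness, the kind of weighted average I have in mind looks like this (a sketch only; the two-universe setup and all of the numbers are invented for illustration):

```python
# Hypothetical setup: in U1 I am an ordinary observer, in U2 I am a
# Boltzmann brain in a thermal universe. The "non-anthropic" probabilities
# of the two universes (however they are obtained) get mixed directly,
# with no per-observer reweighting.
prior = {"U1_ordinary": 0.9, "U2_boltzmann": 0.1}  # assumed numbers

# Probability, under each universe, of some proposition E
# (e.g. "my next observation is coherent").
p_event = {"U1_ordinary": 0.99, "U2_boltzmann": 0.01}

# Weighted average over universes.
p_mixed = sum(prior[u] * p_event[u] for u in prior)
print(p_mixed)  # 0.892
```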
Re: the note. This use of SSA and SIA can also be wrong. If there is a correct method for assigning subjective probabilities to what S.B. will see when she looks outside, it should not be an additional thing on top of predicting the world; it should be a natural part of the process by which S.B. predicts the world.
EDIT: Okay, I’m getting a better understanding of what you mean now. So you’d probably just say that the weight on the different universes should be exactly this non-anthropic probability, assigned by some universal prior or however one assigns probability to universes. My problem with this is that when assigning probabilities in a principled, subjective way (i.e. trying to figure out what your information about the world really implies, rather than starting by assuming some model of the world), there is not necessarily an easily identifiable thing that is the non-anthropic probability of a Boltzmann brain copy of me existing, and this needs to be cleared up in a way that isn’t just about assuming a model of the world. If anthropic reasoning is, as I said above, not some add-on to the process of assigning probabilities but a part of it, then it makes less sense to say something like “just assign probabilities, but don’t do that last anthropic step.”
But I suspect this problem actually can be resolved. Maybe by interpreting the non-anthropic number as something like the probability that the universe is a certain way (i.e. assuming some sort of physicalist prior), conditional only on there being at least one copy of me, and then assuming that this resolves all anthropic problems?
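Something like the following is the conditioning step I’m gesturing at (a sketch under the assumptions just stated: a hypothetical physicalist prior over a few candidate universes, conditioned only on “at least one copy of me exists”, with no further reweighting by number of copies):

```python
# Hypothetical physicalist prior over three candidate universes.
prior = {"U_ordinary": 0.6, "U_boltzmann": 0.3, "U_no_copies": 0.1}

# Probability, in each universe, that at least one copy of me exists.
p_copy_exists = {"U_ordinary": 1.0, "U_boltzmann": 1.0, "U_no_copies": 0.0}

# Bayes: condition the prior on "at least one copy of me exists",
# but do no further anthropic reweighting by number of copies.
unnorm = {u: prior[u] * p_copy_exists[u] for u in prior}
total = sum(unnorm.values())
posterior = {u: unnorm[u] / total for u in unnorm}
print(posterior)  # {'U_ordinary': 0.666..., 'U_boltzmann': 0.333..., 'U_no_copies': 0.0}
```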