I think this is a bad way to think about probabilities under the Everett interpretation, for two reasons.
First, it’s a fully general argument against caring about the possibility of your own death. If this were a good way of thinking, then if you offered me $1 to play Russian roulette with bullets in 5 of the 6 chambers, I should take the bet, because the only branches where I continue to exist are ones where I didn’t get killed. That’s obviously stupid: it cannot possibly be unreasonable to care whether or not one dies. If it were a necessary consequence of the Everett interpretation, then I might say “OK, this means that one can’t coherently accept the Everett interpretation” or “hmm, it seems I have to completely rethink my preferences”, but in fact it is not a necessary consequence of the Everett interpretation.
Second, it ignores the possibility of branches where we survive, but horribly. In the Russian roulette game, there are branches where I do get shot through the head but survive with terrible brain damage. In the unfriendly-AI scenarios, there are branches where the human race survives but unhappily. In either case the probability is small, but maybe not so small as a fraction of the survival branches.
I think the only reasonable attitude to one’s future branches, if one accepts the Everett interpretation, is to care about all those branches, including those where one doesn’t survive, with weight corresponding to |psi|^2. That is, to treat “quantum probabilities” the same way as “ordinary probabilities”. (This attitude seems perfectly reasonable to me conditional on Everett.)
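To make the contrast concrete, here is a minimal Python sketch of the three ways of valuing the roulette bet. Only the rough branch weights come from the example above; the utility numbers and the 0.01 weight for the brain-damage branch are made up purely for illustration.

```python
# Sketch of the two attitudes toward the Russian-roulette bet.
# Branch weights are rough numbers from the example above; the utilities
# and the 0.01 "maimed" weight are purely illustrative assumptions.

branches = [
    # (description,                      branch weight ~ |psi|^2, utility)
    ("survive unharmed, pocket the $1",  1 / 6,                   +1),
    ("shot but survive, brain-damaged",  0.01,                    -1_000_000),
    ("shot and killed",                  5 / 6 - 0.01,            -10_000_000),
]

def expected_utility(branches):
    """Weight every branch by its |psi|^2 weight: the 'treat quantum
    probabilities like ordinary probabilities' attitude."""
    return sum(w * u for _, w, u in branches)

def survival_conditioned_utility(branches, keep):
    """Ignore some branches and renormalise over the rest: the attitude
    argued against above. `keep` decides which branches still count."""
    kept = [(d, w, u) for d, w, u in branches if keep(d)]
    total_weight = sum(w for _, w, _ in kept)
    return sum(w * u for _, w, u in kept) / total_weight

# Caring about all branches, weighted by |psi|^2: hugely negative, so don't play.
print(expected_utility(branches))

# "Only count branches where I come out fine", forgetting the maimed branch:
# the bet looks like free money (the first objection).
print(survival_conditioned_utility(branches, keep=lambda d: "unharmed" in d))

# Even conditioning on survival, the brain-damage branch drags the value far
# below the +$1 payoff (the second objection).
print(survival_conditioned_utility(branches, keep=lambda d: "killed" not in d))
```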