This kind of expectation is useful for planning the actions that the surviving agent would perform, and indeed, if the survival does take place, the probability of that hypothetical, updated on the additional information that the agent survived, is no longer low. But it is not useful for planning actions in a context where the probability of survival is still too low to matter. Furthermore, if the probability of survival is extremely low, even planning for that eventuality, or considering most related questions, is a poor use of one’s time. So if we are discussing a decision made before a significant risk, the sense of expectation that refers to the hypothetical of survival is misleading.
See also this post: Preference For (Many) Future Worlds.
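For concreteness, here is a minimal numerical sketch of the distinction drawn above between the ex ante weight on the survival hypothetical and the weight after updating on survival. The probabilities and payoff are made up purely for illustration; nothing here comes from the original comment.

```python
# Toy numbers (assumptions of mine, not the commenter's) contrasting the two
# senses of "expectation": the ex ante weight on the survival hypothetical
# versus the weight after updating on the information that survival occurred.

p_survive = 1e-6                     # assumed: survival is extremely unlikely beforehand
value_of_plans_given_survival = 1.0  # assumed: plans made for survival pay off fully if you survive

# Before the risk, planning for survival is weighted by the tiny prior...
ex_ante_weight = p_survive * value_of_plans_given_survival   # 1e-06

# ...but conditional on having survived, the same hypothetical gets full weight.
ex_post_weight = 1.0 * value_of_plans_given_survival         # 1.0

print(ex_ante_weight, ex_post_weight)
```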
I just want to throw this in here because it seems like a good place: it seems to me that you would want yourself to reason as if only the worlds where you survive count, while others would want you to reason as if every world where they survive counts. The game-theoretic compromise is therefore to care about worlds in proportion to the number of people in them with whom you might end up wanting to interact. I think this matches our intuitions reasonably well.
Except for the doomsday device part, but I think evolution can be excused for not adequately preparing us for that one.
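The proposed compromise can be made slightly more explicit with a toy calculation. The worlds, probabilities, and head-counts below are invented for illustration only, not taken from the comment; the “doomsday” world is included just to show where the purely selfish weighting and the compromise weighting come apart.

```python
# Toy worlds: (label, probability, you_survive, number_of_people_you_might_interact_with).
# All numbers are invented for illustration.
worlds = [
    ("everyone fine", 0.50, True,  1000),
    ("only you die",  0.10, False, 1000),
    ("doomsday",      0.40, False, 0),
]

# "Only worlds where I survive count":
selfish = {label: (p if you_survive else 0.0)
           for label, p, you_survive, n in worlds}

# Proposed compromise: care about each world in proportion to its probability
# times the number of people in it with whom you might end up interacting.
compromise = {label: p * n for label, p, _, n in worlds}

print(selfish)     # {'everyone fine': 0.5, 'only you die': 0.0, 'doomsday': 0.0}
print(compromise)  # {'everyone fine': 500.0, 'only you die': 100.0, 'doomsday': 0.0}
```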
PS: there is a wonderfully pithy way of stating quantum immortality in LW terms: “You don’t believe in Quantum Immortality? But as your survival becomes increasingly unlikely, all valid future versions of you will come to believe in it. And as we all know, if you know you will be convinced of something, you might as well believe it now…”