I totally expect to experience “afterlife” some day
The word “expectation” refers to probability. When the probability is low, as in tossing a coin 1000 times and getting “heads” every time, we say that the event is “not expected”, even though it’s possible. Similarly, an afterlife is, strictly speaking, possible, but it’s not expected, in the sense that it holds only insignificant probability. With such a low probability, it doesn’t significantly contribute to expected utility, so for decision-making purposes it’s an irrelevant hypothetical.
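A quick numeric sketch of that point (the payoff figure below is an arbitrary stand-in, not something from the comment):

```python
# Probability of a fair coin coming up heads on all of 1000 tosses.
p_all_heads = 0.5 ** 1000            # about 9.3e-302

# Even paired with a huge payoff, the event's contribution to expected
# utility is negligible (the payoff figure is an arbitrary stand-in).
payoff = 1e12
contribution = p_all_heads * payoff  # about 9.3e-290

print(p_all_heads, contribution)
```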
Well, this sounds right, but it seems to indicate some problem with decision theory. If a cat has to endure 10 rounds of Schrödinger’s experiment, with a 1/2 probability of death in each round, there should be some sane way for the cat to express its honest expectation of observing itself alive at the end.
This kind of expectation is useful for planning the actions that the surviving agent would perform, and indeed, if the survival takes place, the updated probability of that hypothetical (given the additional information that the agent did survive) would no longer be low. But it’s not useful for planning actions in a context where the probability of survival is still too low to matter. Furthermore, if the probability of survival is extremely low, even planning actions for that eventuality, or considering most related questions, is an incorrect use of one’s time. So if we are discussing a decision that takes place before a significant risk, the sense of expectation that refers to the hypothetical of survival is misleading.
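To put numbers on the cat example from the previous comment (a minimal sketch; the survival payoff is an invented placeholder):

```python
# Unconditional probability of surviving all 10 rounds at 1/2 each.
p_survive = 0.5 ** 10                     # 1/1024, roughly 0.001

# Before the experiment, any plan that only pays off if the cat survives
# enters the expected-utility sum with that small weight.
# (The payoff value is an invented placeholder.)
payoff_if_survived = 100.0
weight_before = p_survive * payoff_if_survived   # about 0.098

# After survival is observed, the hypothetical is updated on that evidence:
# P(survived | survival observed) = 1, so plans conditioned on survival
# are evaluated at full weight.
weight_after = 1.0 * payoff_if_survived          # 100.0

print(p_survive, weight_before, weight_after)
```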
I just want to throw this in here because it seems a good place: to me it seems that you would want yourself to reason as if only the worlds where you survive count, but others would want you to reason as if every world where they survive counts. So the game-theoretic expected outcome is the one where you care about worlds in proportion to the people in them with whom you might end up wanting to interact (see the sketch below). I think this matches our intuitions reasonably well.
Except for the doomsday device part, but I think evolution can be excused for not adequately preparing us for that one.
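One way to read that weighting proposal as an explicit calculation (a speculative sketch; the branches, probabilities, and head-counts are all invented for illustration):

```python
# Hypothetical branches after a risky event: made-up branch probabilities
# and counts of people in each branch you might end up interacting with.
worlds = [
    {"name": "both survive",      "prob": 0.25, "people": 2},
    {"name": "only they survive", "prob": 0.25, "people": 1},
    {"name": "only you survive",  "prob": 0.25, "people": 1},
    {"name": "neither survives",  "prob": 0.25, "people": 0},
]

# Care about each world in proportion to its probability times the number
# of potential interaction partners there, then normalize the weights.
raw = [w["prob"] * w["people"] for w in worlds]
total = sum(raw)
for w, r in zip(worlds, raw):
    print(w["name"], round(r / total, 2))
```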
PS: there is a wonderfully pithy way of stating quantum immortality in LW terms: “You don’t believe in Quantum Immortality? But as your survival becomes increasingly unlikely, all valid future versions of you will come to believe in it. And as we all know, if you know you will be convinced of something, you might as well believe it now…”
The primary purpose of decision theory is to determine good decisions, which is what I meant to refer to by saying “for decision making purposes”. I don’t see how “expressing honest expectation” in the sense of your example would contribute to the choice of decisions. More generally, this sense of “expectation” doesn’t seem good for anything except for creating a mistaken impression that certain incredibly improbable hypotheticals matter somehow.
See also: Preference For (Many) Future Worlds.