If you don’t believe in an afterlife, then it seems you currently have two choices...
Believing in an afterlife doesn’t grant you one more option. This is a statement about ways of mitigating or avoiding death, and beliefs are not part of that subject matter. An improved version of the statement would say, “If there is no afterlife, then...”. In this form, it’s easier to notice that since it’s known with great certainty that there is no afterlife, the hypothetical isn’t worth mentioning.
since it’s known with great certainty that there is no afterlife, the hypothetical isn’t worth mentioning
I’m convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably now living in simulations, and it is not inconceivable that some portion of them will be allowed to live “outside” their simulations after their “deaths”. Since one cannot feel one’s own nonexistence, I totally expect to experience “afterlife” some day.
I totally expect to experience “afterlife” some day
The word “expectation” refers to probability. When the probability is low, as with tossing a coin 1000 times and getting “heads” each time, we say that the event is “not expected”, even though it’s possible. Similarly, an afterlife is strictly speaking possible, but it’s not expected, in the sense that it holds only insignificant probability. With its low probability, it doesn’t significantly contribute to expected utility, so for decision making purposes it’s an irrelevant hypothetical.
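Just to put rough numbers on this, here is a minimal sketch; the utility figure is an arbitrary placeholder, not anything claimed above:

```python
# Probability of getting "heads" 1000 times in a row with a fair coin.
p_all_heads = 0.5 ** 1000
print(p_all_heads)  # ~9.3e-302: possible, but not "expected" in any useful sense

# A hypothetical's contribution to expected utility is probability * utility,
# so at this kind of probability the contribution stays negligible unless the
# utility assigned to the outcome is made astronomically large.
utility_if_it_happens = 1e6  # arbitrary placeholder value
print(p_all_heads * utility_if_it_happens)  # still ~9.3e-296
```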
With its low probability, it doesn’t significantly contribute to expected utility, so for decision making purposes it’s an irrelevant hypothetical.
Well, this sounds right, but seems to indicate some problem with decision theory. If a cat has to endure 10 rounds of Schrödinger’s experiments with 1⁄2 probability of death in each round, there should be some sane way for the cat to express its honest expectation to observe itself alive in the end.
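For concreteness, a minimal sketch of the arithmetic in the cat example; the two printed numbers are the unconditional survival probability and the probability after updating on the information that the cat did survive:

```python
rounds = 10
p_survive_round = 0.5  # 1/2 chance of surviving each Schrödinger round

# Unconditional probability of surviving all 10 rounds: the number a planner
# would use before the experiment starts.
p_survive_all = p_survive_round ** rounds
print(p_survive_all)  # 1/1024, a little under 0.1%

# Probability of being alive given that the cat observes anything at the end:
# a dead cat observes nothing, so conditional on observation it is alive.
p_alive_given_observing = 1.0
print(p_alive_given_observing)
```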
This kind of expectation is useful for planning actions that the surviving agent would perform, and indeed if the survival takes place, the updated probability (given the additional information that the agent did survive) of that hypothetical would no longer be low. But it’s not useful for planning actions in the context where the probability of survival is still too low to matter. Furthermore, if the probability of survival is extremely low, even planning actions for that eventuality or considering most related questions is an incorrect use of one’s time. So if we are discussing a decision that takes place before a significant risk, the sense of expectation that refers to the hypothetical of survival is misleading.
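See also this post: Preference For (Many) Future Worlds.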
I just want to throw this in here because it seems like a good place for it: it seems to me that you would want to reason as if only the worlds where you survive count, but others would want you to reason as if every world where they survive counts. So the game-theoretically expected outcome is the one where you care about worlds in proportion to the number of people in them with whom you might end up wanting to interact. I think this matches our intuitions reasonably well.
Except for the doomsday device part, but I think evolution can be excused for not adequately preparing us for that one.
PS: there is a wonderfully pithy way of stating quantum immortality in LW terms: “You don’t believe in Quantum Immortality? But as your survival becomes increasingly unlikely, all valid future versions of you will come to believe in it. And as we all know, if you know you will eventually be convinced of something, you might as well believe it now...”
The primary purpose of decision theory is to determine good decisions, which is what I meant to refer to by saying “for decision making purposes”. I don’t see how “expressing honest expectation” in the sense of your example would contribute to the choice of decisions. More generally, this sense of “expectation” doesn’t seem good for anything except for creating a mistaken impression that certain incredibly improbable hypotheticals matter somehow.
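See also: Preference For (Many) Future Worlds.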
I think you may be treating your continuation as a binary affair (you either exist or don’t exist, you either experience or don’t experience) as if “you” (your mind) were an ontologically simple entity.
Let’s say that in the vast majority of universes you “die” from an external perspective. This means that from an internal perspective, in the vast majority of universes you’ll experience the degradation of your mental circuitry—whether that degradation lasts ten years or one millisecond, you will experience it up to the point where you are no longer able to experience anything.
So let’s say that at some point your mind is in a state where you’re still having experiences, but you no longer form new memories or hold any old ones; and because you don’t even have much of a short-term memory, your thinking doesn’t get more complicated than “Fuzzy warmth. Nice” or perhaps “Pain. Hurts!”.
At this point, this experience is all you effectively are—it’s not as if this circuitry will be metaphysically connected to a single specific set of memories, or a single specific personality.
Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix. And therefore it will experience an afterlife—in a sense. But not necessarily an afterlife with memories or a personality that have anything to do with your present memories or personality, right?
Quantum Immortality doesn’t exist. At best one can hope for Quantum Reincarnation—and even that requires certain unverified assumptions...
Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix.
There should be some universes in which the simulators will perform a controlled procedure specifically designed for saving me. This includes going to all the trouble of reattaching what’s left of me to all my best parts and memories retrieved from an adequate backup.
Of course, it is possible that the simulators will attach some completely arbitrary memories to my poor degraded personality. This nonsensical act will surely happen in some universes, but I do not expect to perceive myself as existing in these cases.
It seems you are right that gradual degradation is a serious problem with QI-based survival in non-simulated universes (unless we move to a more reliable substrate, with backups and all).
True. Believing doesn’t grant more options, but if you truly believe in an afterlife, then this is not a question that would concern you: you believe you have a better option. :)
If you believe in an afterlife, the question that concerns you is still whether there is an afterlife, not whether you believe in an afterlife. So you still should worry about the hypothetical of there being an afterlife, which you’d assign more probability, not about the hypothetical of you believing in an afterlife.
If you believe in an afterlife, the question that concerns you is still whether there is an afterlife, not whether you believe in an afterlife.
I think we are assigning different meanings to “believe”. In my sense, a true believer has no doubt, so “whether” is no longer a question. I think we may be getting sidetracked on semantics, though.
The overwhelming majority of the human population disagrees with you. Yes, rationally, we know with great certainty that there is no afterlife. (Well, at least we know it with almost the same certainty with which we know there is no Flying Spaghetti Monster; the probability of each claim is only infinitesimally smaller than 1.)
But we choose to accept unverified statements from our elders regarding the afterlife rather than dwell on the facts of death and fail to procreate.