Not all worlds in which you continue to exist are pleasant ones. I think Michael Vassar once called quantum immortality the most horrifying hypothesis he had ever taken seriously, or something along those lines.
Indeed. In particular, “dying of old age” is pretty damn horrifying if you think quantum immortality holds.
Sure, but the idea that we should ignore futures where we are dead still has some bizarre implications. For example, it would strongly contradict Nick Bostrom’s MaxiPOK principle (maximize the probability of an OK outcome). In particular, if you thought that the development of AGI would lead to utopia with probability p_u, to near-instant human extinction with probability p_e, and to torture of humans with probability p_t, where
p_t << p_u
then one would have a strong motive to accelerate the development of AGI as much as possible, because the total probability of mediocre outcomes due to non-extinction global catastrophes, like resource depletion or nuclear war, increases every year that AGI doesn’t get developed. Your actions would be dominated by trying to strengthen the inequality p_t << p_u whilst getting the job done quickly enough that p_u was still bigger than the probability of ordinary global problems, such as global warming, striking within your development window. You would do this even at the expense of increasing p_e, potentially until it was greater than 0.5. You’d better be damn sure that anthropic reasoning is correct if you’re going to do this!
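To make the bizarreness concrete, here is a minimal toy sketch in Python (not from the original discussion; every probability and utility below is a made-up illustrative number) comparing a standard expected-value calculation against one that conditions on survival, i.e. ignores the extinction branches:

```python
# Toy comparison of standard expected value vs. "ignore worlds where you are
# dead" (condition on survival). All numbers are made-up illustrations, not
# anyone's actual estimates.

def expected_values(p_u, p_e, p_t, p_m, u_u=1.0, u_e=0.0, u_t=-10.0, u_m=0.2):
    """Return (standard EV, survivor-conditioned EV) for one scenario.

    Outcomes: utopia, extinction, torture, mediocre non-extinction catastrophe.
    """
    assert abs(p_u + p_e + p_t + p_m - 1.0) < 1e-9
    standard = p_u * u_u + p_e * u_e + p_t * u_t + p_m * u_m
    p_survive = p_u + p_t + p_m  # extinction branches dropped entirely
    conditioned = (p_u * u_u + p_t * u_t + p_m * u_m) / p_survive
    return standard, conditioned

# Careful development: low extinction risk, but more years exposed to
# mediocre non-extinction catastrophes before AGI arrives.
careful = expected_values(p_u=0.45, p_e=0.05, p_t=0.02, p_m=0.48)

# Rushed development: extinction risk pushed past 0.5, but p_t << p_u and
# little time left for mediocre outcomes.
rushed = expected_values(p_u=0.40, p_e=0.55, p_t=0.01, p_m=0.04)

print(f"careful: standard EV = {careful[0]:+.3f}, conditioned EV = {careful[1]:+.3f}")
print(f"rushed:  standard EV = {rushed[0]:+.3f}, conditioned EV = {rushed[1]:+.3f}")
# With these numbers the standard calculation prefers the careful plan;
# conditioning on survival flips the preference to the rushed plan,
# because p_e never enters the conditioned calculation.
```

With these made-up numbers the standard calculation favours the careful plan, while the survivor-conditioned calculation favours the rushed one, precisely because p_e has no effect on anything once the dead branches are thrown away.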
If there’s quantum immortality, what proportion of your lives would be likely to be acutely painful?
I don’t have an intuition on that one. It seems as though worlds in which something restores your good health would predominate over worlds in which you’re just barely hanging on, but I’m unsure of this.
Hunh. I’m glad I’m not the only person who has always found quantum immortality far more horrifying than nonexistence.