But the today-you should not make plans that include killing most of the future copies just because they didn’t win some kind of lottery.
I don’t think the “killing most of your future copies” scenarios are very interesting here. I have presented a few scenarios that I think are somewhat more relevant elsewhere in this thread.
In any case, I’m not sure I’m buying the amplitude-maximization thing. Supposedly there’s an infinite number of copies of me that live around 80 more years at most; so most of the amplitude is in Everett branches where that happens. Then there are some copies, with a much smaller amplitude (but again there should be an infinite number of them), who will live forever. If I’m just maximising utility, why wouldn’t it make sense to sacrifice all the other copies so that the ones who will live forever have at least a decent life? How can we make any utility calculations like that?
If you find yourself in a situation where, thanks to some unlikely miracle, you are alive in the year 3000
“If”. The way I see it, the point of QI is that, given some relatively uncontroversial assumptions (MWI or some other infinite universe scenario is true and consciousness is a purely physical thing), it’s inevitable.
Then there are some copies [...] who will live forever.
The ones who actually live for ever may have infinitesimal measure, in which case even with no discount rate an infinite change in their net utility needn’t outweigh everything else.
I will make a stronger claim: they almost certainly do have infinitesimal measure. If there is a nonzero lower bound on Pr(death) in any given fixed length of time, then Pr(alive after n years) decreases exponentially with n, and Pr(alive for ever) is zero.
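To put a toy number on that (my own figures, purely for illustration): if Pr(death within any given year) never falls below 0.001, then Pr(alive after n years) ≤ 0.999^n, which is already below one in a million after roughly 14,000 years and tends to zero as n grows without bound.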
What if we consider not just the probability of not dying, but also of, say, dying and being resurrected by someone in the far future? In general, consider the probability that, for a state of mind at time t, there exists somewhere a state of mind at time t+1 such that, from a subjective point of view, there is no discontinuity. I find it hard to see how that probability could ever be strictly zero, even though what you say kind of makes sense.
If there is any sequence of events with nonzero probability (more precisely: whose probability of happening in a given period of time never falls below some fixed positive value) that causes the irrecoverable loss of a given mind-state, then with probability 1 any given mind-state will not persist literally for ever.
(It might reappear, Boltzmann-brain-style, by sheer good luck. In some random place and at some random time. It will usually then rapidly die because it’s been instantiated in some situation where none of what’s required to keep it around is present. In a large enough universe this will happen extremely often—though equally often what will reappear is a mind-state similar to, but subtly different from, the original; there is nothing to make this process prefer mind-states that have actually existed before. I would not consider this to be “living for ever”.)
Maybe not. But let’s suppose there was no “real world” at all, only a huge number of Boltzmann brains, some of which, from a subjective point of view, look like continuations of each other. If for every brain state there is a new spontaneously appearing and disappearing brain somewhere that feels like the “next state”, wouldn’t this give a subjective feeling of immortality, and wouldn’t it be impossible for us to tell the difference between this situation and the “real world”?
In fact, I think our current theories of physics suggest this is actually the case, but since it leads to the Boltzmann brain paradox, maybe it instead points to a major flaw in those theories. I suppose similar problems apply to some other hypothetical situations, like nested simulations.