How much amplitude is non-negligible? It seems like the amplitude that you have now is probably already negligible: in the vast majority of the multiverse, you do not exist or are already dead. So it doesn’t seem to make much sense to base expected value calculations on the amount of amplitude left.
I’d say that you should not care about how much amplitude you have now (because there’s nothing you can do about it now), only about how much of it you will maintain in the future. The reason, roughly, is that this is what the amplitude-maximization algorithm would do.
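A rough way to write that down (an illustrative gloss; $w_0$, $f$, and $u$ are my own notation, not anything standard): whatever amplitude you have now multiplies every option equally, so it cancels out of the comparison between actions.

$$
\arg\max_a \sum_b w_0\, f(a,b)\, u(b) \;=\; \arg\max_a \sum_b f(a,b)\, u(b), \qquad w_0 > 0,
$$

where $w_0$ is your current measure, $f(a,b)$ is the fraction of it that ends up in future branch $b$ if you take action $a$, and $u(b)$ is how much you value that branch.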
Yeah, compared with the whole universe (or multiverse), even the best you can do is already pretty close to zero. But there’s nothing you can do about it. You should only care about things you can change. (Of course once in a while you should check whether your ideas about “what you can change” correspond to reality.)
It’s similar to lottery tickets: you shouldn’t buy them, because it’s not worth it. However, if you find yourself in a situation where you somehow got the winning ticket (because you bought it anyway, or someone gave it to you, doesn’t matter), you should try to spend the money wisely. The chance of winning the lottery is small before it happens, but it is a certainty once you are already inside the winning branch. You shouldn’t throw the money away just because “the chances of this happening were small anyway”. Your existence here and now is an example of an unlikely ticket that won anyway.
Intuitively, if you imagine the Everett branches, you should imagine yourself as a programmer of millions of tiny copies of you living in the future. Each copy should do the best they can, ignoring the other copies. But if there is something you can do now to increase the average happiness of the copies, you should do it, even if it makes some copies worse off. That’s the paradox—you (now) are allowed to harm some copies, but no copy is allowed to harm itself. For example, by not buying the lottery ticket you are doing great harm to the copy living in the future where your “lucky numbers” won. That’s okay, because in return a million other copies got an extra dollar to spend. But if you buy the ticket anyway, the lucky copy is required to maximize the benefits they get from the winnings.
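Here is a minimal sketch of that two-level policy (the numbers and odds are made up for illustration): ex ante, buying the ticket lowers the branch-weighted average utility, so the “programmer” says don’t buy; ex post, the copy that happens to hold a winning ticket should still make the most of it.

```python
# Illustrative sketch of branch-weighted decision-making (made-up numbers).
# "Measure" here plays the role of the amplitude-squared weight of a branch.

TICKET_PRICE = 1.0
JACKPOT = 1_000_000.0
P_WIN = 1e-7  # assumed odds, deliberately worse than the payout ratio

def average_utility(buy: bool) -> float:
    """Ex-ante utility averaged over branches, weighted by their measure."""
    if not buy:
        return 0.0  # baseline: keep the dollar in every branch
    win = P_WIN * (JACKPOT - TICKET_PRICE)
    lose = (1 - P_WIN) * (-TICKET_PRICE)
    return win + lose  # negative: buying lowers the branch-weighted average

def copy_policy(holds_winning_ticket: bool) -> str:
    """What an individual future copy should do, given where it finds itself."""
    return "spend the winnings wisely" if holds_winning_ticket else "carry on as usual"

if __name__ == "__main__":
    print("average utility if you buy:   ", average_utility(True))
    print("average utility if you don't: ", average_utility(False))
    print("policy for the lucky copy:    ", copy_policy(True))
```

The two functions are deliberately separate computations: the first is the decision you make now over all branches, the second is the decision each copy makes from inside its own branch.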
Same for “quantum immortality”. If you find yourself in a situation that thanks to some unlikely miracle you are alive in the year 3000, good for you, enjoy the future (assuming it is enjoyable, which is far from certain). But the today-you should not make plans that include killing most of the future copies just because they didn’t win some kind of lottery.
But the today-you should not make plans that include killing most of the future copies just because they didn’t win some kind of lottery.
I don’t think the “killing most of your future copies” scenarios are very interesting here. I have presented a few scenarios that I think are somewhat more relevant elsewhere in this thread.
In any case, I’m not sure I’m buying the amplitude-maximization thing. Supposedly there’s an infinite number of copies of me that live around 80 more years at most; so most of the amplitude is in Everett branches where that happens. Then there are some copies, with a much smaller amplitude (but again there should be an infinite number of them), who will live forever. If I’m just maximising utility, why wouldn’t it make sense to sacrifice all the other copies so that the ones who will live forever will have at least a decent life? How can we make any utility calculations like that?
If you find yourself in a situation that thanks to some unlikely miracle you are alive in the year 3000
“If”. The way I see it, the point of QI is that, given some relatively uncontroversial assumptions (MWI or some other infinite universe scenario is true and consciousness is a purely physical thing), it’s inevitable.
Then there are some copies [...] who will live forever.
The ones who actually live for ever may have infinitesimal measure, in which case even with no discount rate an infinite change in their net utility needn’t outweigh everything else.
I will make a stronger claim: they almost certainly do have infinitesimal measure. If there is a nonzero lower bound on Pr(death) in any given fixed length of time, then Pr(alive after n years) decreases exponentially with n, and Pr(alive for ever) is zero.
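Spelling that out (with $p > 0$ as the assumed lower bound on the yearly chance of death, and $u_{\max}$ as an assumed cap on the utility of one year of life):

$$
\Pr(\text{alive after } n \text{ years}) \le (1-p)^n \xrightarrow[n\to\infty]{} 0,
\qquad
\mathbb{E}[U] \;\le\; \sum_{n=0}^{\infty} (1-p)^n\, u_{\max} \;=\; \frac{u_{\max}}{p} \;<\; \infty.
$$

So even with no discounting, the total expected utility summed over all survival lengths stays finite, which is one way to see why the “live for ever” branches need not swamp the calculation.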
What if we consider not just the probability of not dying, but also of, say, dying and being resurrected by someone in the far future? More generally: the probability that, for a state of mind at time t, there exists some state of mind at time t+1 such that, from a subjective point of view, there is no discontinuity between them. I find it hard to see how that probability could ever be strictly zero, even though what you say kind of makes sense.
If there is any sequence of events with nonzero probability (more precisely: whose probability of happening in a given period of time never falls below some fixed positive value) that causes the irrecoverable loss of a given mind-state, then with probability 1 any given mind-state will not persist literally for ever.
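A toy version of that claim, as a sketch (the states, probabilities, and resurrection mechanic are all assumed for illustration): a mind-state can be revived from ordinary death, but as long as irrecoverable loss has a fixed probability floor in every period, it gets absorbed eventually with probability 1.

```python
# Toy Markov chain for a single mind-state: ALIVE, DEAD (recoverable, e.g. it
# might later be resurrected), or irrecoverably LOST.  Resurrection is allowed,
# but irrecoverable loss has a fixed per-period probability floor, so the chain
# is absorbed in LOST with probability 1.  All numbers are made up.
import random

P_DIE = 0.01        # ALIVE -> DEAD per period
P_RESURRECT = 0.05  # DEAD -> ALIVE per period
P_LOST = 0.001      # per-period floor on irrecoverable loss, from any state

def periods_until_lost(rng: random.Random, horizon: int = 10**6) -> int:
    """Simulate one history; return the period at which the mind-state is lost."""
    state = "ALIVE"
    for t in range(horizon):
        if rng.random() < P_LOST:
            return t  # irrecoverably gone
        if state == "ALIVE" and rng.random() < P_DIE:
            state = "DEAD"
        elif state == "DEAD" and rng.random() < P_RESURRECT:
            state = "ALIVE"
    return horizon  # not observed to be lost within the horizon

if __name__ == "__main__":
    rng = random.Random(0)
    runs = [periods_until_lost(rng) for _ in range(2_000)]
    print("fraction lost within the horizon:",
          sum(t < 10**6 for t in runs) / len(runs))
    print("mean periods until irrecoverable loss:", sum(runs) / len(runs))
```

With a loss floor of 0.001 per period, survival past n periods is bounded by 0.999^n, so essentially every simulated run ends in the absorbing state long before the horizon; resurrection changes how much time is spent alive, not whether the state persists for ever.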
(It might reappear, Boltzmann-brain-style, by sheer good luck. In some random place and at some random time. It will usually then rapidly die because it’s been instantiated in some situation where none of what’s required to keep it around is present. In a large enough universe this will happen extremely often—though equally often what will reappear is a mind-state similar to, but subtly different from, the original; there is nothing to make this process prefer mind-states that have actually existed before. I would not consider this to be “living for ever”.)
Maybe not. But let’s suppose there was no “real world” at all, only a huge number of Boltzmann brains, some of which, from a subjective point of view, look like continuations of each other. If for every brain state there is a new spontaneously appearing and disappearing brain somewhere that feels like the “next state”, wouldn’t this give a subjective feeling of immortality, and wouldn’t it be impossible for us to tell the difference between this situation and the “real world”?
In fact, I think our current theories of physics suggest this to be the case, but since it leads to the Boltzmann brain paradox, maybe it actually demonstrates a major flaw instead. I suppose similar problems apply to some other hypothetical situations, like nested simulations.