If QI is true, then no matter how small the share of worlds where radical life extension is possible, I will eventually find myself in one of them, if not in 100 years then maybe in 1000.
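A minimal sketch of the arithmetic behind this claim, assuming we can condition on still having experiences at time $t$ (the symbols $\varepsilon$, $s_L$, $s_N$ are mine, not part of the original exchange): let $\varepsilon$ be the measure of worlds where radical life extension is possible, and let $s_L(t)$ and $s_N(t)$ be the chance of still being alive at time $t$ in such a world and in an ordinary world, respectively. Then
$$P(\text{life-extension world} \mid \text{alive at } t) = \frac{\varepsilon\, s_L(t)}{\varepsilon\, s_L(t) + (1-\varepsilon)\, s_N(t)} \to 1 \quad \text{as } t \to \infty,$$
because $s_N(t)$ falls toward zero while $s_L(t)$ does not. However small $\varepsilon$ is, conditioning on survival eventually concentrates almost all of the remaining measure in the life-extension worlds.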
What was that talk about ‘stable but improbable’ worlds? If someone cares enough to revive me (I assume my measure would mostly enter universes where I was being simulated), then that doesn’t seem likely. I also can’t fathom that an AI wanting to torture humans would take up a more-than-tiny share of such universes. Do you think such things are likely, or is it that their downsides are so bad that they must be figured into the utilitarian calculus?
The world where someone wants to revive you has low measure (maybe not, but let’s assume it does), but if they do it, they will preserve you there for a very long time. For example, some semi-evil AI may want to revive you only to show you red fish for the next 10 billion years. It is a very unlikely world, but still possible. And if you are in it, it is very stable.
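One rough way to see why a world can be both very unlikely and dominant, assuming future experience can be weighted by measure times duration (a sketch, not anything stated above): if world $w$ is entered with measure $m_w$ and then preserves the observer for subjective duration $T_w$, the share of expected future observer-moments spent in $w$ is about
$$\frac{m_w T_w}{\sum_v m_v T_v},$$
so a branch with tiny $m_w$ but $T_w$ on the order of $10^{10}$ years can still account for most of one’s expected future experience.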
But wait, doesn’t that require the computational theory of mind and ‘unification’ of identical experiences? If they don’t hold, then we can’t go into other universes regardless of whether MWI is true (if they do, then we could even if MWI is false). I would have to already be simulated, and if I am, then there’s no reason to suppose it is by the sort of AI you describe.
Your suggestion was based on the assumption of an AI doing it, correct? It isn’t something we can naturally fall into? Also, even if all your other assumptions are true, why suppose that ‘semi-evil’ AIs, which you apparently think have low measure, take the lion’s share of highly degraded experiences? Why wouldn’t a friendly (or at least friendlier) AI try to rescue them?
QI works only if at least three main assumptions hold, but we don’t know for sure whether they are true. One is the very large size of the universe, the second is “unification of identical experiences”, and the third one is that we could ignore the decline of measure corresponding to survival in MWI. So the validity of QI is uncertain. Personally, I think it is more likely to be true than untrue.
It was just a toy example of a rare but stable world. If friendly AIs dominate the measure, you will most likely be resurrected by a friendly AI. Moreover, a friendly AI may try to dominate the total measure in order to increase humans’ chances of being resurrected by it, and it could try to rescue humans from evil AIs.
the second is “unification of identical experiences”
I disagree. Quantum Immortality can still exist without it; it’s only this supposition of the AI ‘rescuing you’ that requires that. Also, if AIs are trying to grab as many humans as possible, there’s no special reason to focus on dying ones. They could just simulate all sorts of brain states with memories and varied experiences, and then immediately shut down the simulation.
If we assume that we cannot apply self-locating belief to our experience of time (and assume AIs are indeed doing this), we should expect at every moment to enter an AI-dominated world. If we can apply self-locating beliefs, then the simulation would almost certainly be already shut down and we would be in that world. Since we aren’t, there’s no reason to suppose that these AIs exist or that they can ‘grab a share of our souls’ at all.
The question is, can we apply self-locating belief to our experience of time?
and the third one is that we could ignore the decline of measure corresponding to survival in MWI
How would measure affect this? If you’re forced to follow certain paths due to not existing in any others, then why does it matter how much measure it has?
Agreed, but some don’t.
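The disagreement can be put compactly, as a sketch under the assumption that the surviving branches at time $t$ have measures $m_1(t), \dots, m_k(t)$ (this framing is mine, not either participant’s): the “ignore the decline” view takes the probability of finding yourself in branch $i$ to be the renormalized
$$P(i \mid \text{survival}) = \frac{m_i(t)}{\sum_j m_j(t)},$$
which is unchanged if every $m_j(t)$ shrinks by the same factor; the opposing view holds that the absolute size of $\sum_j m_j(t)$ matters, so a steep overall decline in surviving measure counts against QI even after conditioning on survival.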
We could be (and probably are) in an AI-created simulation; maybe it is a “resurrectional simulation”. But if friendly AIs dominate, there will be no drastic changes.
Why? Surely they’re trying to rescue us. Maintaining the simulation would take away resources from grabbing even more human-measure.
To avoid creating just random minds, the future AI has to create a simulation of the whole history of humanity, and that simulation is still being run, not merely maintained. I explored the topic of resurrectional simulations here: https://philpapers.org/rec/TURYOL
Why wouldn’t it create random minds if it’s trying to grab as much ‘human-space’ as possible?
EDIT: Why focus on the potential of quantum immortality at all? There’s no special reason to focus on what happens when we *die*, in terms of AI simulation.