The basic idea seems to be that, since anything that can happen does happen, and it is impossible to remember your own death, every living thing is immortal.
Note that this statement talks about a ritual of cognition, not about the world: it talks about what one can remember, but obviously it is possible to infer that in some circumstances you'll die, or that in counterfactuals following different past events you've died. So this kind of “immortality” is an artefact of an artificial limitation on the ways of conceptualizing the real world, one that can easily be lifted and thus shown not to reflect any actual property of the world.
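To make that concrete, here is a minimal sketch (Python, with invented probabilities; nothing here is from the original comment) of an agent whose forward-looking world model assigns probability to its own death just fine, while a backward-looking “what can I remember” query, restricted to the branches that leave a rememberer, trivially returns survival.

```python
from itertools import product

# Toy world: three independent events, each lethal with the given (made-up) probability.
lethal_probs = [0.1, 0.3, 0.05]

branches = []  # (probability, survived) for every combination of outcomes
for outcomes in product([False, True], repeat=len(lethal_probs)):
    p = 1.0
    for died, p_die in zip(outcomes, lethal_probs):
        p *= p_die if died else (1.0 - p_die)
    branches.append((p, not any(outcomes)))

# Forward-looking inference: the world model assigns probability to death directly.
p_survive = sum(p for p, alive in branches if alive)
print(f"P(survive all three events) = {p_survive:.3f}")  # about 0.598

# Backward-looking "memory": condition on a memory of the outcome existing at all,
# which only happens in the surviving branches -- so survival comes out certain.
remembered = [(p, alive) for p, alive in branches if alive]
p_survive_given_memory = sum(p for p, alive in remembered if alive) / sum(p for p, _ in remembered)
print(f"P(survived | a memory of the outcome exists) = {p_survive_given_memory:.1f}")  # 1.0
```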
I think it goes deeper than that. Basically, a purely egoistic being would want to optimize its predictions for futures in which it lives, giving rise to QI. However, any being would also want others to optimize their predictions for universes in which it lives, so that it can trust those others not to screw it over in the universes where those others die. The stable point is thus to care about any world in which your social context (i.e. the people whose trust you need) survives.
QI is not a statement of anticipation; it's a weighing of futures, and a quite egocentric one at that. The logic falls away almost as soon as you care about anybody else (see the toy sketch below).
[edit] Prediction: if this holds in general, people with families ought to be more accepting of their own deaths.
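Here is the toy sketch referred to above (Python, all payoffs invented; it is not from the original comment): under QI-style weighting, where futures are renormalized over the agent's own survival, a lethal gamble dominates doing nothing, but the ranking flips as soon as a bystander's payoff gets any weight at all.

```python
# Each branch: (probability, agent_survives, agent_payoff, bystander_payoff)
gamble = [
    (0.5, True,  100.0,    0.0),  # agent survives and wins big
    (0.5, False,   0.0, -200.0),  # agent dies; a bystander is left to clean up
]
do_nothing = [
    (1.0, True, 0.0, 0.0),
]

def qi_value(branches):
    """Expected agent payoff, renormalized over the branches where the agent survives."""
    surviving = [(p, a_pay) for p, alive, a_pay, _ in branches if alive]
    total = sum(p for p, _ in surviving)
    return sum(p * a_pay for p, a_pay in surviving) / total

def caring_value(branches, weight_on_others=1.0):
    """Expected utility over all branches, with some weight on the bystander's payoff."""
    return sum(p * (a_pay + weight_on_others * b_pay)
               for p, _, a_pay, b_pay in branches)

print("QI-weighted: gamble =", qi_value(gamble), " do_nothing =", qi_value(do_nothing))
# QI-weighted: gamble = 100.0  do_nothing = 0.0
print("Caring:      gamble =", caring_value(gamble), " do_nothing =", caring_value(do_nothing))
# Caring:      gamble = -50.0  do_nothing = 0.0
```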
In many possible worlds, and specifically in many possible futures, there is a period of time in which you live, and then an event of dying. If you care about yourself, you care about this event and want to control it, and so you want to take these possible worlds into account. There might even be possible futures in which you start out not being alive and are then alive (revived somehow). You'd want to take these into account as well.
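A small illustration of the above (Python again, numbers entirely made up): caring about the event of your dying, and wanting to control it, means keeping the futures in which you die, and the ones in which you are later revived, inside whatever you are computing, because any intervention you might evaluate acts precisely on those futures.

```python
# Each possible future: (probability, years_of_life_in_that_future)
baseline = [
    (0.70, 40),   # live out a normal remaining lifespan
    (0.25, 5),    # die early
    (0.05, 200),  # die early, but get revived much later somehow
]

# A hypothetical intervention that moves probability from "die early" to "revived".
with_intervention = [
    (0.70, 40),
    (0.10, 5),
    (0.20, 200),
]

def expected_years(futures):
    return sum(p * years for p, years in futures)

print("baseline:         ", expected_years(baseline))           # 39.25
print("with intervention:", expected_years(with_intervention))  # 68.5
# The comparison only means something because the early-death and revival futures
# stay in the sum; they are exactly the worlds the intervention acts on.
```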
Please expand this into a top-level post that we can link to whenever somebody starts to talk about quantum immortality.