It doesn’t seem too much more distressing to believe that there are copies of me being tortured right now, than to believe that there are people in North Korea being tortured right now, or other similarly unpleasant facts everyone agrees to be true.
There’s a distinction between intuitive identity—my ability to get really upset about the idea that me-ten-minutes-from-now will be tortured—and philosophical identity—an ability to worry slightly about the idea that a copy of me in another universe is getting tortured. This difference isn’t just instrumentally based on the fact that it’s easier for me to save me-ten-minutes-from-now than me-in-another-universe; even if I were offered some opportunity to help me-in-another-universe, I would feel obligated to do so only on grounds of charity, not on grounds of selfishness. I’d ground that in some mental program that intuitively makes me care about me-ten-minutes-from-now, a concern much stronger than whatever rational kinship I can muster with me-in-another-universe. This mental program seems pretty good at dealing with minor breaks in continuity like sleep or coma.
The problem is, once death comes into the picture, the mental program can’t carry on business as usual—there won’t be any “me-ten-minutes-from-now”. And one reaction is to automatically switch allegiance to the closest copy of me—for example, cryonically-resurrected-me-a-century-from-now. I don’t think this allegiance-switching has any fundamental ontological basis, but I’m not prepared to say it’s stupid either. My point here is only that once you’re okay with switching allegiances, you might as well do it to the nearest other pre-existing copy of you, rather than go through all the trouble of creating a new one.
I agree that we can’t ground identity in individual moments. For one thing, the only reasonable candidate for “moment” is the Planck time, and there’s no experience that can happen on that short an interval. For another, static experiences don’t seem to be conscious: if I were frozen in time, I couldn’t think “Darnit, I’m frozen in time now!” because that thought involves a state change. I think this is what you’re saying in your third-to-last paragraph, but I’m not sure.
I’m leaning towards saying my identification with self-at-the-present-moment isn’t any more interesting or fundamental than my artificially created identification with me-ten-minutes-from-now, and that a feeling of being in the present is just a basis for other computational processes. As far as I can understand, this doesn’t seem to be your solution at all.
Do any of the various theories marketed as “timeless” here claim that the belief in a present moment is purely indexical—that is, a function of the randomly chosen observer-moment currently experienced as “me” being Yvain(2012) as opposed to Yvain(2013), in the same sense that seeing a quantum coin come up heads instead of tails is indexical? It seems like an elegant idea and would be relevant to this discussion.
The problem is, once death comes into the picture, the mental program can’t carry on business as usual—there won’t be any “me-ten-minutes-from-now”. And one reaction is to automatically switch allegiance to the closest copy of me—for example, cryonically-resurrected-me-a-century-from-now.
Once you get resurrected, won’t the mental program continue carrying on business as usual, and so won’t the “me-ten-minutes-from-now” keep being there?
I don’t understand why this is downvoted