I think this discussion focuses on how others would behave towards me, and derives what ought to be regarded as my future self from there. That is certainly a valid discussion to have. However, my post is talking about a different (though related) topic.
For example, suppose that, for whatever crazy reason, I think the me from tomorrow (the one with largely the same physical body and no tricks played on memory whatsoever) is not my future self. Then I would do a bunch of irresponsible things that would lead to others’ dislike or hostility toward me, which could eventually lead to my demise. But so what? If I regard him as a different person, then to hell with him; the current me wouldn’t even care. So the fact that this is detrimental to that future person would not compel the current me to regard him as my future self.
Luckily we do not behave that way. Everyone, the rational ones at least, considers the person with the same physical body and the memories of the current self to be themself in the future. That is the survival instinct; that is the consensus.
But that consensus is about an idiosyncratic situation: one where memory (experience) and the physical body are bound together. Take that away, and we no longer have a clear, unequivocal basis from which to start a logical discussion. Someone whose basis is the survival of the same physical body would not step into the teleporter, even if it is greatly convenient and would benefit the one who steps out of it. Someone else could start from a different basis: they may believe that only the pattern matters, so uploading the mind to a silicon machine to make easy copies, at the cost of adversely affecting the carbon body, would be welcome. None of these positions can be rebutted by a cost/benefit analysis of some future minds, because they may or may not care about those minds, to varying degrees, in the first place.
Sure, logic is not entirely irrelevant. It comes into play after you pick the basis of your decision. But values, rather than logic, largely determine the answer to the question.
I agree with all of that.
It’s probably relevant to think about why we tend to value our future selves in the first place. I think it’s that each of us has memories (and the resulting habits) of thinking “wow, past self really screwed me over. I hate that. I think I’ll not screw future self over so that doesn’t happen again.” We care because there’s a future self that will hate us if we do, and we can imagine that very vividly. In addition, there’s an unspoken cultural assumption that it’s logical to care about our future selves.
I included some of how other people regard our identity, but that’s not my point. My point is that, for almost any reason you could come up with to value your physically continuous future self, you’d also value a physically discontinuous future self that maintains the same mind-pattern. The exception is deciding “I no longer care about anything that teleports,” which is possible and consistent, but no more sensible than ceasing to care about anyone wearing a blue hat.
So sure, people aren’t necessarily logically wrong if they value their physically continuous future self over a perfect clone (or upload). But they probably are making a logical error if they have even modestly consistent values.