I’d be interested in whether you still disagree and why.
I’m still trying to figure out what does and does not matter to my concept of continuity and why.
Let me ask you this: which person from five seconds in the future do you care more about protecting: yourself or a random person similar to you?
Is there some theoretical threshold of similarity between yourself and another person beyond which you will no longer be sure which location will be occupied by the brain-state your current brain-state will evolve to in the next second?
Let me ask you this: which person from five seconds in the future do you care more about protecting: yourself or a random person similar to you?
All else equal, I think I prefer my own future self. It depends how similar, though. If this “random” person were in fact more similar to myself than I currently am, I’d prefer them. By “more similar to myself than I currently am”, I’m just leaving open the possibility that there are things that make my default future self not the optimum in terms of continuity. For example, what if this other person remembers things that I have forgotten?
The way I think of this, there isn’t really a fundamental concept of continuity; the question is really “What kind of processes do I want the universe to be turned into?” There’s no fundamental concept of “me”; it’s just that I prefer the future to contain people who have properties X and Y, as opposed to properties W and Z.
Likewise, there is no fundamental concept of “person” apart from the structure of our preferences over what gets done with this computational substrate we have.
Is there some theoretical threshold of similarity between yourself and another person beyond which you will no longer be sure which location will be occupied by the brain-state your current brain-state will evolve to in the next second?
I’m not sure that makes sense. I certainly have intuitions about “continuity”, but they might be broken in cases like that. For a question like that, I think talking about continuity has to be replaced with the more general question of what I want the future to look like.
(A future in which there are two of me? Yes please. As for which is the “real me”, who cares?)