That’s fair enough. You got the point with your first comment, which was to point out that issues of memory-identity and continuous-experience-identity are / could be separate.
Perhaps I understand more than I think I do, then.
It seems to me that what I’m saying here is precisely that those issues can’t be separated, because they predict the same sets of observations. The world in which identity is a function of memory is in all observable ways indistinguishable from the world in which identity is a function of continuous experience. Or, for that matter, of cell lineages or geographical location or numerological equivalence.
And I’m saying that external observations are not all that matters. Indeed it feels odd to me to hold that view when the phenomenon under consideration is subjective experience itself.
I didn’t say “external observations.”
I said “observations.”
If you engage with what I actually said, does it feel any less odd?
You said “predict the same set of observations,” which I implicitly took to mean “tell me something I can witness to update my beliefs about which theory is correct,” to which the answer is: there is nothing you (necessarily external) can witness to know whether my upload is death-and-creation or continuation. I alone am privy to that experience (continuation or oblivion), and since the recorded memory is the same in either case, there’s no way the clone could tell you afterward.
You could use a model of consciousness and a record of events to infer which outcome occurred. And that’s the root issue here: we have different models of consciousness, and therefore make different inferences.
You keep insisting on inserting that “external” into my comment, just as if I had said it, when I didn’t. So let me back up a little and try to be clearer.
Suppose the future continuation of my current self that you describe (let’s label him “D” for convenience) comes to exist in the year 2034.
Suppose D reads this exchange in the archives of LessWrong, and happens to idly wonder whether they, themselves, are in fact the same person who participated in LessWrong under the username TheOtherDave back in January of 2014, but subsequently went through the process you describe.
“Am I the same person as TheOtherDave?” D asks. “Is TheOtherDave having my experiences?”
What ought D expect to differentially observe if the answer were “yes” vs. “no”? This is not a question about external observations, as there’s no external observer to make any such observations. It’s simply a question about observations.
And as I said initially, it seems clear to me that no such differentially expected observations exist… not just no external observations, but no observations period. As you say, it’s just a question about models—specifically, what model of identity D uses.
Similarly, whether I expect to be the same person experiencing what D experiences is a question about what model of identity I use.
And if D and I disagree on the matter, neither of us is wrong, because it’s not the sort of question we can be wrong about. We’re “not even wrong,” as the saying goes. We simply have different models of identity, and there’s no actual territory for those models to refer to. There’s no fact of the matter.
Similarly, if I decide that you and I are really the same person, even though I know we don’t share any memories or physical cells or etc., because I have a model of identity that doesn’t depend on any of that stuff… well, I’m not even wrong about that.
When TheOtherDave walks into the destructive uploader, either he wakes up in a computer or he ceases to exist and experiences no more. Not being able to experimentally determine what happened afterwards doesn’t change the fact that one of those descriptions matches what you experience and the other does not.
What do I experience in the first case that fails to match what I experience in the other?
That is, if TheOtherDave walks into the destructive uploader and X wakes up in a computer, how does X answer the question “Am I TheOtherDave?”
Again, I’m not talking about experimental determination. I’m talking about experience. You say that one description matches my experience and the other doesn’t… awesome! What experiences should I expect X to have in each case?
It sounds like your answer is that X will reliably have exactly the same experiences in each case, and so will every other experience-haver in the universe, but in one case they’re wrong and in the other they’re right.
Which, OK, if that’s your answer, I’ll drop the subject there, because you’re invoking an understanding of what it means to be wrong and right about which I am profoundly indifferent.
Is that your answer?
how does X answer the question “Am I TheOtherDave?”

This is so completely unrelated to what I am talking about. Completely out of left field. How the upload/clone answers or fails to answer the question “Am I TheOtherDave?” is irrelevant to the question at hand: what did TheOtherDave experience when he walked into the destructive uploader?
I’ve rephrased this as many times as I know how, but apparently I’m not getting through. I give up; this is my last reply.
OK.