Agreed. It’s the only way we have of verifying that it’s a duck.
But is the destructively scanned duck the original duck? It appears to be the same to all intents and purposes, even though you can see the mulch that used to be the body of the original lying there beside the new copy.
I’m not sure that duck identity works like personal identity. If I destroy a rock but make an exact copy of it ten feet to the east, whether or not the two rocks share identity just depends on how you want to define identity—the rock doesn’t care, and I’m not convinced a duck would care either. Personal identity, however, is a whole other thing: there’s a bunch of stuff we care about to do with having the right memories, the right personality, the right utility function, and so on, and if these things aren’t right it’s not the same person. If you make a perfect copy of a person and destroy the original, then it is the same person. You’ve just teleported them—even if you can see the leftover dust from the destruction. Being made of the “same” atoms, after all, has nothing to do with identity—atoms don’t have individual identities.
That’s a point of philosophical disagreement between us. Here’s why:
Take an individual.
Then take a cell from that individual. Grow it in a nutrient bath. Force it to divide. Rinse, wash, repeat.
You create a clone of that person.
Now, is that clone the same as the original? No, it is not. It is a copy. Or, in a natural version of this, a twin.
Now let’s say technology exists to transfer memories and mind states.
After you create the clone-that-is-not-you, you then put your memories into it.
If we keep the original alive, the clone is still not you. How does killing the original QUICKLY make the clone you?
(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label “me”? What conceivable difference does it make whether we label both of those people “me”?
If there is some X that differs between those people, such that the label “me” applies to one value of X but not the other value, then talking about which one is “me” makes sense. We might not be able to detect the difference, but there is a difference; if we improved the quality of our X-detectors we would be able to detect it.
But if there is no such X, then for as long as we continue talking about which of those people is “me,” we are not talking about anything in the world. Under those circumstances it’s best to set aside the question of which is “me.”
“(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label “me”? What conceivable difference does it make whether we label both of those people “me”?”
Because we already have a legal precedent: twins. Though their memories are very limited, they are legally different people. My position is that this is rightly so.
Identical twins, even at birth, are different people: they’re genetically identical and shared a very close prenatal environment, but the actual fork happened sometime during the zygote stage of development, when neither twin had a nervous system let alone a mind-state. But I’m not sure why you’re bringing this up in the first place: legalities don’t help us settle philosophical questions. At best they point to a formalization of the folk solution.
As best I can tell, you’re trying to suggest that individual personhood is bound to a particular physical instance of a human being (albeit without actually saying so). Fair enough, but I’m not sure I know of any evidence for that proposition other than vague and usually implicitly dualist intuitions. I’m not a specialist in this area, though. What’s your reasoning?
Risk avoidance. I’m uncomfortable with the position that, if you create a second copy and destroy the original, the copy is the original, simply because if it isn’t, then the original is now dead.
Yes, but how do you conclude that a risk exists? Two philosophical positions don’t mean fifty-fifty chances that one is correct; intuition is literally the only evidence for one of the alternatives here to the best of my knowledge, and we already know that human intuitions can go badly off the rails when confronted with problems related to anthropomorphism.
Granted, we can’t yet trace human thoughts and motivations down to the neuron level, but we’ll certainly be able to by the time we’re able to destructively scan people into simulations; if there’s any secret sauce involved, by then we’ll know it’s there, if not exactly what it is. If dualism turns out to win by then, I’ll gladly admit I was wrong; but if no such evidence has shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in “But There’s Still A Chance, Right?”.
Here’s why I conclude a risk exists: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I read that earlier, and it doesn’t answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the time of division (equivalently, that personhood corresponds to something other than unique brain state), you’ve already assumed a conclusion to all questions along these lines—and in fact gone past all questions of risk of death and into certainty.
But you haven’t provided any reasoning for that belief: you’ve just outlined the consequences of it from several different angles.
Yes, we have two people after this process has completed… I said that in the first place. What follows from that?
EDIT: Reading your other comments, I think I now understand what you’re getting at.
No, if we’re talking about only the instant of duplication and not any other instant, then I would say that in that instant we have one person in two locations.
But as soon as the person at those locations starts to accumulate independent experiences, we have two people.
Similarly, if I create a static backup of a snapshot of myself, and create a dozen duplicates of that backup, I haven’t created a dozen new people, and if I delete all of those duplicates I haven’t destroyed any people.
The uniqueness of experience is important.
This follows: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I agree that the clone is not me until you write my brain-states onto his brain (poor clone). At that point it is me—it has my brain states. Both the clone and the original are identical to the one who existed before my brain-states were copied—but they’re not identical to each other, since they would start to have different experiences immediately. “Identical” here means “the same person as”—not exact isomorphic copies. It seems obvious to me that personal identity cannot be a matter of isomorphism, since I’m not an exact copy of myself from five seconds ago anyway. So the answer to the question is: killing the original quickly doesn’t make a difference to the identity of the clone, but if you allow the original to live a while, it becomes a unique person, and killing him is immoral. Tell me if I’m not being clear.
Regardless of what you believe, you’re avoiding the interesting question: if you overwrite your clone’s memories and personality with your own, is that clone the same person as you? If not, what is still different?
I don’t think anyone doubts that a clone of me without my memories is a different person.