Thank you for the story. It succinctly describes my stance on identity, and similarly describes my frustration with people who do not understand the lessons in the story.
1) Who cares if it’s a wind-up toy or not, if it provides indistinguishable outputs for a given set of inputs? Does it really matter if the result of a mathematical calculation is computed on an abacus, a handheld calculator, in neural wetware, or on a supercomputer?
2) Where you draw the line is up to you. If you have a stroke and lose a big chunk of your brain, are you still you? If you’re reduced to an unthinking blob due to massive brain damage, is that still you? It’s up to you to decide where you draw the line, so long as you recognize that you’re putting it in an arbitrary place determined by you, and that other people may decide to put it elsewhere.
A good set of thought experiments that helped me work through this is to imagine that you have a magical box you can step into that will create a perfect copy of you. Said box will also magically destroy copies that enter it and press the ‘destruct’ button.
What mindset would you need to have to be able to properly use the box?
Under what circumstances would you be able to create a copy, then enter the box and press the destruct button yourself?
Personally, when I wake up in the morning, I have trouble accepting that I’m still the same “me” that went to bed last night.
I suspect you act quite differently towards your future self compared to other people who will wake up tomorrow morning.
Whereas I’m the same “me” that I was a year ago. The “me” of five and ten years ago are farther from that, while the “me” I was at age 10 is probably not very close at all. I’d allow a pretty big amount of slop to exist between different copies of myself.
This thought always gets me thinking. When I come across variations of the above thought experiment, it makes me wonder if a magical box is even necessary. Are copies of me being destroyed as I type? Haven’t I died an infinite number of deaths from the time I started typing till now? Couldn’t my hitting the return key at the end of this sentence be sufficient to replicate the copy/kill box à la MWI?
I am having a hard time distinguishing between what MWI says about my death at branch points and what happens when you copy and kill yourself in a copy machine.
Was that also your point or am I mistaken?
I think having an explicit box, which allows for two or more simultaneous copies of you to exist and look at each other, is pretty important. Just being created and destroyed in the normal course of things, when everything looks normal, doesn’t have the same impact.
My interpretation is that MWI says precisely nothing about you at branch points, because you don’t die there—or rather, I don’t necessarily consider a single branch point change to be sufficient to make me not ‘me’. Further, creating a copy, or multiple copies, doesn’t mean anything died in my view.
Where do you draw the line when it comes to not caring about destroying yourself versus your copy? How did you make that decision?
For me, whether or not I’m me is an arbitrary line in the sand, a function of the mental and physical ‘distance’ or difference between copies. I think that’s part of the point of the story—which version of the daughter is the daughter? Which one is close enough? You can’t get it exact, so draw a line in the sand somewhere, according to your personal preferences and/or utility functions.
My line is apparently pretty unusual. I’m not sure I can define exactly where it is, but I can give you some use cases that are in the ‘clear and obvious’ category. Understand that the below is predicated on: 1) I have extremely high confidence that the box is creating ‘good enough’ copies and will not fail; 2) the box has a failsafe that prevents me from destroying the last copy, if only one copy exists; and 3) it’s better if there’s a small number of copies, from a resource-conservation standpoint.
I step into the box and create another copy. I lose a coin toss, which means I get to do the bills and take out the trash, whereas the copy gets to do interesting work that is expected to be of value in the long run. In this case, I do the bills and take out the trash, then return to the box and destroy myself.
In the above situation, I win the coin toss and begin doing interesting work. Later, my copy returns and tells me that he witnessed a spectacular car crash, rushed to the scene to aid people, and probably saved somebody’s life. His accumulated experience exceeds what I gained from my work, so I write down or tell him the most critical insights I uncovered, then return to the box and destroy myself.
I step into the box and create a copy. One of us wins the coin toss and begins a major fork: the winner will dedicate the next ten years to music and performance. In a year, the two of us meet and discuss things. We’ve both had incredible experiences, but they’re not really comparable. Neither of us is willing to step into the box to terminate, and neither asks the other to do so.
Upon losing a coin toss, I take a trip to a third world country and am imprisoned unfairly and indefinitely for reasons beyond my control. The cost, time, and effort to fix the situation is prohibitive, and I do not have access to a destruction box. If possible, I communicate my status to my other copies, then commit suicide using whatever means necessary.
There are much more questionable cases between these, where deciding which one to destroy comes down to weighing one against the other as best I can—but frankly, if I had said box, I’d be very careful and strict about it, so as to keep the situations as clear as possible.
You, sir, have a very strange sense of identity. I’m not sure I’d give my copy anything more than the time of day. And I certainly don’t extend self-preservation to be inclusive of him. I’m not even going to touch the suicide. A line of thinking which leads you to suicide should be raising all sorts of red flags, IMHO.
Imagine that you’re a program, and creating a new copy of you is as simple as invoking fork().
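To make that concrete, here is a minimal sketch of the fork() analogy (assuming a POSIX environment; the variable names and values are purely illustrative): after the call returns, both processes carry identical state and resume from the same point, and only the return value tells “original” and “copy” apart.

    /* Minimal sketch of the fork() analogy, assuming a POSIX system.
       After fork(), parent and child resume from the same point with
       identical memory; only the return value distinguishes them. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        int memories = 42;         /* state shared up to the moment of copying */
        pid_t result = fork();     /* both copies resume from this exact point */

        if (result == 0) {
            /* child: a copy of the parent's memory as it was at fork() time */
            printf("copy:     memories=%d, pid=%d\n", memories, (int)getpid());
        } else if (result > 0) {
            /* parent: indistinguishable in state except for the return value */
            printf("original: memories=%d, pid=%d\n", memories, (int)getpid());
        } else {
            perror("fork");        /* the "box" failed to make a copy */
            return 1;
        }
        return 0;
    }

Nothing in the copied state itself privileges one process over the other; the “original”/“copy” labels come only from the return value.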
Voluntarily stepping into the box is no different than suicide, and frankly if you’re resource constrained, it’s a better option than murdering a copy. IMHO, you shouldn’t be allowed to make copies of yourself unless you’re willing to suicide and let it take your place. People unable to do that lack the mindset to properly manage copy creation and destruction.
I think you misunderstand me. It doesn’t matter how easy it is to do, if you’re a program. I wouldn’t step into the box any more than I would commit suicide, and either one would be tantamount to murder.
I guess parents should be ready to kill themselves when their kids reach 18, to make sure there’s room for them in the world? No, that’s a terrible line of reasoning.
The fact that you considered that parent/kid question to be a valid argument indicates strongly to me that you don’t have the mindset or understanding to make copies safely.
How does it not follow from what you said?
Sexual reproduction is a form of reproduction. Anyone who is a parent knows that children are a limited means of carrying identity in the form of drives, goals, likes & dislikes, etc. into the future, even if vicariously (both because of your influence on them, and their influence on you). If inputs/outputs are all that matter in determining identity, then identity is a fuzzy concept and a continuous scale, as we are all constantly changing. Your children carry on some part of your personal identity, even if in nothing but their internal simulations of you. The same arguments apply.
If we’re going to talk about societal proscriptions, then I would say those who think their sentient creations should be prepared to commit suicide for any reason are the ones who shouldn’t be dabbling in creation...
Yes, sexual reproduction is a form of reproduction, one which we were explicitly not talking about. We were talking about perfect copies.
You may continue beating at the straw man if you wish, but don’t expect me to respond.
There is no such thing as a perfect copy. That’s what the OP is about! Even if there were some sort of magical philosophy box that made perfect replicas, you would cease to be perfect copies of each other as soon as you exited the box and started receiving different percepts—you would become different physical sentient entities leading separate lives. If you want to believe that these two clones are in fact the same identity, then you have to provide a specific reason—for example: related histories, similarity of behavior, motivation & drives, etc. Furthermore it would have to be a fuzzy comparison because as soon as you exit the box you start to diverge. How much change does it take until you can no longer claim that you and your clone are the same person? A week? A year? One hundred years? At that point you and your clone will have lived more time separately than your shared history. Do you still have the right to claim the other as a direct extension of yourself? What if a million years pass? I am quite confident that in a million years, you will have less in common with your clone than you currently do with your own children (assuming you have children).
So no, it’s not a strawman. It’s a direct conclusion from where your reasoning leads. And when a line of reasoning leads to absurd outcomes, it’s often time to revisit the underlying assumptions.
This looks like an argument for extreme time preference, not an argument against copies. Why identify with one million-years-later version of yourself and exclude the other, unless we beg the question?
That’s what I’m saying. I myself wouldn’t identify with any of the copies, no matter how near or distant. My clone and I have a lot in common, but we are separate sentient beings (hence: requesting suicide of the other is tantamount to murder). But if you do identify with clones (as in: they are you, not merely other beings that are similar to you), then at some point you and they must cross the line of divergence where they are no longer identifiable with you, or else the argument reduces to absurdity. Where is that line? I see no non-arbitrary way of defining it.
EDIT: which led me to suspect that, other than intuition, I have no reason to think that my clone and I share the same identity, which led me to consider other models for consciousness and identity. My terseness isn’t just because of the moral repugnance of asking others to suicide, but also because this is an old, already-hashed-out argument. I first encountered it in philosophy class 10+ years ago. If there is a formal response to the reduction to absurdity I gave (which doesn’t also throw out consciousness entirely), I have yet to see it.
We certainly don’t need a bright line.
Maybe you already got this part, but time preference is orthogonal to copies vs originals.
Eliezer says he defines personal identity in part by causal connections, which exist between you and the “clone” as well as between you and your “original” in the future. This definition also suggests a hole in your argument for strong time preference.
You are misreading me. I don’t have time preference. If an exact perfect replica of me were made, it would not be me even at the moment of duplication.
I have continuation-of-computation preference. This is much stricter than Eliezer’s causal-connection-based identity, but it also avoids many of the weird predictions that arise from that approach.
And yes, you would need a bright line in this case. Fuzziness is in the map, not the territory on this item.