How about the Sequences-favoured definition of identity that says that you should feel like sufficiently high-fidelity copies of you are actually the same you, rather than being near-twins who will thenceforth diverge? (As espoused in Timeless Identity and Identity Isn’t In Specific Atoms.) This has always struck me as severely counterintuitive, and its consistency doesn’t remedy that; if two instances can fork they will, barring future merges being a meaningful concept (something that I don’t think anything in the Sequences shows).
When you (or the Sequences) say that two copies of me “should” feel the same to me, is that word “should” being used in a normative or a descriptive sense? What I mean is, am I being told that I “ought” to adopt the perspective that the other copy is me, or am I being told that I will naturally experience that other copy’s sensory input as if it were the input going into my original body?
It reads to me like an “ought”.
Okay, thank you for clarifying that. In that case, though, I fail to see the support for that normative claim. Why “should” a copy of me feel like me (as if I had any control over that in the first place)? As far as I can see, even if a copy of me “should” feel like me in a normative sense, that won’t matter in a descriptive sense, because I have no way of affecting which copy of me I experience. Descriptively, I either experience one or the other, right? Either the teleporter is a suicide machine or it isn’t, right?
Things can be in a superposition of states while there is still uncertainty, but at some point it will come down to a test. Let’s say the teleporter makes a copy of me on Mars, but doesn’t destroy the original. Instead, scientists have to manually shoot the original a few minutes after the teleportation experiment. What do I experience? Do I immediately wake up on Mars? Do I wake up still on Earth and get shot, and my experience of anything ceases? Do I wake up on Earth and get shot, and then my subjective experience instantly “teleports” to the only copy of me left in the universe, so that I wake up on Mars with no memory or knowledge that the other me just got shot? What do you predict that I will experience?
If it is possible for the copying to be nondestructive, then why make it destructive?
At the moment, all we get is multiple versions from the same base entity, separated by time but not space. So it’s frustrating that most of the thought experiments avoid the complement: multiple versions of the same base entity, separated by space but not time. The “Kill you to teleport a new you” scenario comes across as contrived. Let’s look at “scan you to teleport a new you, but no one dies”, and see what model of identity comes out.
Well, I’d expect that spatially-but-not-temporally-separated copies would lack the memory/body method of communication/continuity that temporally-but-not-spatially-separated instances have. They’d probably share the same sense of identity at the moment of copying, but would gradually diverge.
If it is objectively impossible to determine which is the original (for example, replication on the quantum level makes the idea of an original meaningless), that would differ from the version where a copy gets teleported to Mars, and thus the Martian copy knows it is a copy, and the original knows that it is the original. I don’t really know what to make of either scenario, only that I’d expect Martian Me to be kinda upset in the second, but still to prefer it to destructive copying.
In the case of non-destructive copying, which copy will I end up experiencing? If it is a 50⁄50 chance of experiencing either copy, then in cases where the copy would inhabit a more advantageous spatial location than the one I was currently in (such as, if I were stuck on Mars and wanted to go back to Earth), it would be in my interest to copy myself many many times via a Mars-Earth teleporter in order to give myself a good probability that I would end up back on Earth where I wanted to be.
Let’s say I valued being back home on Earth more than anything else, and I was willing to split whatever legal property I had back on Earth with 100 other copies of me. Then it would make sense for my original self on Mars to tell the scientists: “Copy me 100 times onto Earth. No more, no less, regardless of whatever I, the copy on Mars, say after this, and regardless of whatever the copies on Earth say after this.”
I would end up with a very high probability of experiencing one of those copies back on Earth. Of course, all of the copies on Earth would insist that THEY were the successful case of subjective teleportation and that no further teleportation would be required. But they would always say that, regardless of whether I was really experiencing any of them. That is why I pre-committed to copying 100 times, even if the first copy reports, “Yay! The teleportation was a success! No need to make the other 99 copies!” Because at that point, there is still a 50% chance that I am still experiencing the copy back on Mars, which is too high for my tastes.
Likewise, the pre-commitment to copy myself no more than 100 times is important because you have to draw the line somewhere. If I had $100,000 in a bank account back on Earth, I’d like to start out with at least $1,000 of that. If you leave it up to the original Mars copy to decide, then the teleportation copying will go on forever. Even after the 100th copying (by which point I might have already been fortunate to get my subjective experience transferred onto maybe the 55th Earth copy or the 78th Earth copy or the 24th Earth copy or the 3rd Earth copy, who knows?), the copy on Mars will still insist, “No! No! No! The entire experiment has been a huge stroke of bad luck! 100 times I have tried to copy myself, and 100 times the coin has landed on tails, so to speak. We must make some more copies until my subjective experience gets transferred over!” At this point, the other copies would say, “That’s just what we would expect you to always say. You will never say that the experiment was a success. Very likely the original Matthew Opitz’s subjective experience got transferred over to one of us. Which one, nobody can tell from the outside by any experiment, as we will all claim to be that success. But the odds are in favor of one of us being the one that the original Matthew Opitz is subjectively experiencing right now, which is what he wanted all along when he set up this experiment. Sorry!”
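To make the arithmetic behind this scenario explicit, here is a minimal sketch, assuming the 50/50-per-copy model of subjective transfer that the scenario takes for granted (the model itself is exactly what is in dispute), and using the figures given above:

```python
# Sketch of the arithmetic under the assumed 50/50-per-copy model of
# subjective transfer. The model is an assumption of the thought experiment,
# not an established fact.

n_copies = 100                              # copies the Mars original pre-commits to
p_still_on_mars = 0.5 ** n_copies           # chance experience never transfers: ~7.9e-31
earth_savings = 100_000                     # dollars in the Earth bank account
share_per_copy = earth_savings / n_copies   # ~$1,000 per Earth copy, as in the text

print(f"P(still experiencing the Mars copy after {n_copies} copies): {p_still_on_mars:.1e}")
print(f"Share of the Earth savings per copy: ${share_per_copy:,.0f}")
```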
But then, what if tails really had come up 100 times in a row? What if one’s subjective experience really was still attached to the Martian copy? Or what if this idea of a 50⁄50 chance is total bunk in the first place, and subjective experience simply cannot transfer to spatially-separated copies? That would suck.
What if, as the original you on Mars before any of the teleportation copying, you had a choice between using your $100,000 back on Earth to fund a physical rescue mission that would have a 10% chance of success, versus using that $100,000 back on Earth to fund a probe mission that would send a non-destructive teleportation machine to Mars that would make a copy of you back on Earth? If you believe that such an experiment would give you a 50⁄50 chance of waking up as the Earth copy, then it would make more sense to do that. However, if you believe that such an experiment would give you a 0% chance of waking up as the Earth copy, then it would make more sense just to do the physical rescue mission attempt.
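Framed as a toy decision problem (a sketch only: the 10% rescue figure and the 0.5 vs. 0 credences are the ones stated above, and which credence is correct is exactly the open question):

```python
# Toy comparison of the two uses of the $100,000, under different credences
# that a non-destructive copy continues "your" subjective experience.
# The 10% rescue figure and the 0.5 / 0.0 credences come from the text above.

def p_subjectively_back_on_earth(option: str, credence_in_transfer: float) -> float:
    if option == "rescue":
        return 0.10                     # physical rescue mission success rate
    if option == "copy":
        return credence_in_transfer     # single non-destructive copy to Earth
    raise ValueError(option)

for credence in (0.5, 0.0):
    rescue = p_subjectively_back_on_earth("rescue", credence)
    copy = p_subjectively_back_on_earth("copy", credence)
    better = "probe/copy mission" if copy > rescue else "physical rescue"
    print(f"credence {credence:.1f}: rescue {rescue:.0%} vs copy {copy:.0%} -> prefer {better}")
```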
These questions really do have practical significance. They are not just sophistry.
Yeah, it doesn’t work for me either. But apparently there are people for whom it does.
For the people for whom it does seem to make sense to identify with copies of themselves, do those people come to that conclusion because they anticipate being able to experience the input going into all of those copies somehow? Or is there some other reason that they use?
I don’t understand it. I hypothesise that they take on the idea (“you suggest I should think that? oh, OK”) and have a weak sense of self, but I don’t have much data to go on except handling emails from distressed Basilisk victims (who buy into this idea).
Interesting that you put it this way, rather than “should think that”. If indeed the Sequences say “should feel like”, I agree with them. But if they say we “should think that” the copies are the same you, that’s mistaken (because it either violates transitivity of identity, or explodes so many practices of re-identification that it would be much better to coin a new word).
A few words on “feel” and “like”: by “feel” I take it we mean both that one experiences certain emotions, and that one is generally motivated to protect and enhance the welfare of this person. By “like” we mean that the emotions and motivations are highly similar, and clearly cluster together with self-concern simpliciter, even if there are a few differences.
Fuzzy Pattern Identity agrees with the ideas put forward in the posts you link to.
It is counterintuitive, but our intuitions can be faulty, and on close inspection the other candidates for a useful definition of “I” (physical and causal continuity) break down at important edge cases.
Consider: Imagine you are about to be put into a cloning device which will destructively scan your current body and build two perfect copies. Beforehand, both of the expected results of this procedure are reasonably referred to as “you”, just as you would normally refer to a version of yourself from a day in the future. Immediately after the procedure, “you” share vastly more in common with your clone than with past or future versions of your physical continuity, and your responses are more strongly entangled in a decision-theoretic sense.
Yes, but this leads to trivially obvious problems like this one (aliens attempt blackmail by making and torturing thousands of copies of you). I submit that the proposed solution fails intuition badly enough and obviously enough that it would require removing people’s intuition to be acceptable to them, and you’re unlikely to swing this merely on the grounds of consistency. You’d need convincing, non-contrived real-life examples of why this is obviously a superior solution as a practical philosophy.
That problem is almost as strong when the beings simulated are other humans rather than copies of you, so I’m not sure that treating same pattern = same person makes it notably worse.
Additionally, if I had strong reason to believe that my decision to surrender was not in some way entangled (even acausally) with their decision to mass-torture simulations, I might surrender in either case, since I don’t see a strong reason to prefer the preferences of the real humans to those of the simulated ones in the least convenient possible world.
However, in general, it’s handy to have a pre-commitment to fighting back as strongly as possible in these kinds of blackmail situations, because it discourages the use of extreme harm as leverage. If I think that my disposition to surrender would make those tactics more likely to have been used against me, that provides a basis not to surrender despite it being “better” in the current situation.
I don’t think it fails intuition quite as thoroughly as you’re suggesting, but I take the point that good examples of how it works would help. However, real-life examples are going to be very hard to come by, since fuzzy pattern theory only works differently from other common identity theories in situations which are not yet technologically possible and/or involve looking at other Everett branches. In every normal, everyday scenario, it acts just like causal continuity, but unlike causal or physical continuity it does not fail the consistency test under the microscope (and, in my opinion, does less badly on intuition) when you extend it to handle important edge cases which may well be commonplace, or at least possible, in the future. The best I’ve done is link to things which show how other ways of thinking about identity fall apart, and that this way, as far as I have been able to tell, does not; but I’ll keep looking for better ways to show its usefulness.
I’ll note also that intuitively, the two instances of me will have more in common with each other than with me the day before … but they immediately diverge, and won’t remerge, so I think that each would intuit the other as its near-identical twin but nevertheless a different person, rather than the same “I”.
If remerging were a thing that could happen, that would, I think, break the intuition.
(I haven’t looked, but I would be surprised if this hadn’t been covered in the endless discussions on the subject on Extropians in the 1990s that Eliezer notes as the origin of his viewpoint.)