Fuzzy Pattern Identity agrees with the ideas put forward in the posts you link to.
It is counterintuitive, but our intuitions can be faulty, and on close inspection the other candidates for a useful definition of “I” (physical and causal continuity) break down at important edge cases.
Consider: imagine you are about to be put into a cloning device which will destructively scan your current body and build two perfect copies. Beforehand, both of the expected results of this procedure are reasonably referred to as “you”, just as you would normally refer to a version of yourself from a day in the future. Immediately after the procedure, “you” share vastly more in common with your clone than with past or future versions of your physical continuity, and your responses are more strongly entangled in a decision-theoretic sense.
Yes, but this leads to trivially obvious problems like this one (aliens attempt blackmail by making and torturing thousands of copies of you). I submit that the proposed solution fails intuition badly enough, and obviously enough, that it would require removing people’s intuition for it to be acceptable to them, and you’re unlikely to swing this merely on the grounds of consistency. You’d need convincing, non-contrived real-life examples of why this is obviously a superior solution as a practical philosophy.
That problem is nearly as strong when the simulations are of other humans, so I’m not sure that considering same pattern = same person makes it notably worse.
Additionally, if I had strong reason to believe that my decision to surrender was not in some way entangled (even acausally) with their decision to mass-torture simulations, I might surrender in either case, since I don’t see a strong reason to prefer the preferences of the real humans to those of the simulated ones in the least convenient possible world.
However, in general, it’s handy to have a pre-commitment to fighting back as strongly as possible in these kinds of blackmail situations, because it discourages the use of extreme harm as leverage. If I think that my disposition to surrender would make those tactics more likely to have been used against me, that provides a basis for not surrendering despite it being “better” in the current situation.
I don’t think it fails intuition quite as thoroughly as you’re suggesting, but I take the point that good examples of how it works would help. However, real-life examples are going to be very hard to come by, since fuzzy pattern theory only works differently from other common identity theories in situations which are not yet technologically possible and/or involve looking at other Everett branches. In every normal everyday scenario, it acts just like causal continuity, but unlike causal or physical continuity it does not fail the consistency test under the microscope (and, in my opinion, does less badly on intuition) when you extend it to handle important edge cases which may well be commonplace, or at least possible, in the future. The best I’ve done is link to things which show how other ways of thinking about identity fall apart, while this way, as far as I have been able to tell, does not; but I’ll keep looking for better ways to show its usefulness.
I’ll note also that, intuitively, the two instances of me will have more in common with each other than with me the day before … but they immediately diverge, and won’t remerge, so I think that each would intuit the other as its near-identical twin but nevertheless a different person, rather than the same “I”.
If remerging were a thing that could happen, that would, I think, break the intuition.
(I haven’t looked, but I would be surprised if this hadn’t been covered in the endless discussions on the subject on Extropians in the 1990s that Eliezer notes as the origin of his viewpoint.)