What is the purpose of making any sort of distinction between the identity of one person and the identity of another?
Say you have a perfect copy of yourself excluding your spatial coordinates. You’re faced with a choice of terminating either yourself or your copy. How do you make that choice?
The intellectually honest answer to this question seems easy, but I’m inclined to believe that if you claim not to have conflicting intuitions, you’re lying and/or signalling.
Out of the various people in the future who might or might not fall under the category of ‘yourself’, for which of them would you be willing to avoid eating a marshmallow now, so that those people could enjoy /two/ marshmallows?
Say you have a perfect copy of yourself excluding your spatial coordinates. You’re faced with a choice of terminating either yourself or your copy. How do you make that choice? The intellectually honest answer to this question seems easy, but I’m inclined to believe that if you claim not to have conflicting intuitions, you’re lying and/or signalling.
Like a lot of the rarefied hypotheticals that come up here, I find that it helps clarify my thinking about this one to separate the epistemological confusion from the theoretical question.
That is… OK, say I (hereafter TheOtherDave, or TOD) have a perfect copy of myself (hereafter TheOtherOtherDave, or TOOD). If TOD is given a choice between (C1) terminating TOD + giving TOOD $N, and (C2) terminating TOOD, for what N (if any) does TOD choose C1? The “intellectually honest” answer is that this depends critically on TOD’s confidence that TOOD is a perfect copy of TOD.
But if we assert that TOD is 1-minus-epsilon confident, which seems to be what you have in mind, then I think I can honestly say (no lying or signaling involved) that TOD chooses C1 for any N that TOD would bother to bend over and pick up off the street. Maybe not a penny, but certainly a dollar.
I don’t understand this question. Is it assuming some privileged hypothesis about how MWI works?
My understanding of the question does not depend on any MWI-theorizing.
I expect there to exist ~7B people in an hour, who might or might not qualify as “myself” (I expect one and only one of them to do so, though there’s a small non-zero chance that none will do so, and a much smaller chance that more than one will). Of that set, for which ones would I forego a marshmallow so they could have two? (The actual answer to that question is “almost all of them”; I don’t care for marshmallows and I far prefer the warm-fuzzy feeling of having been generous. I’d answer differently if you replaced the marshmallow with something I actually want.)
The “intellectually honest” answer is that this depends critically on TOD’s confidence that TOOD is a perfect copy of TOD.
This is not what I had in mind; I assumed the certainty is a given. I really need some kind of tabooing software to remind me not to use value-laden expressions...
But if we assert that TOD is 1-minus-epsilon confident, which seems to be what you have in mind, then I think I can honestly say (no lying or signaling involved) that TOD chooses C1 for any N that TOD would bother to bend over and pick up off the street. Maybe not a penny, but certainly a dollar.
This is what I meant by an intellectually honest answer, and I don’t disagree with it at all, if I look at it from a safe distance. If you actually imagine being in that situation, do you have no intuitions/fears siding with preserving TOD? If you do, are they zero evidence/value to you? If you don’t, should I believe you don’t, considering what’s typical for humans? What is TOD’s confidence that the problem of personal identity has been dissolved? Is it 1-minus-epsilon? Does $1 represent this confidence also?
If you actually imagine being in that situation, do you have no intuitions/fears siding with preserving TOD? If you do, are they zero evidence/value to you?
You’re inviting me to imagine having 1-minus-epsilon confidence that this guy I’m looking at, TOOD, really is a perfect copy of me.
My first question is: how am I supposed to have arrived at that state? I can’t imagine it, personally. It seems utterly implausible… I can’t think of any amount of observation that would raise my confidence that high.
I haven’t given a huge amount of thought to this, but on casual thought I don’t think I can get above .8 confidence or so. Possibly not even close to that high.
But if I ignore all of that, and imagine as instructed that I really am that confident… somehow… then yeah, I expect that the evidentiary value of my intuitions/fears around siding with preserving TOD is sufficiently negligible that, multiplied by the value of me, it works out to less than a dollar.
should I believe you don’t, considering what’s typical for humans?
How confident do you think it’s reasonable to be of the typical behavior for a human in a situation that no human has ever actually found themselves in? How confident do you think it’s reasonable to be of the typical behavior for a human in a situation that I cannot imagine arriving at even a reasonable approximation of?
Implausible situations ought to produce implausible behavior.
What is TOD’s confidence that the problem of personal identity has been dissolved?
I am not sure enough of what this question means to essay an answer.
EDIT: Or are you asking how confident I am, given 1-epsilon confidence that TOOD is a perfect copy of me, that there isn’t some other imperceptible aspect of me, X, which this perfect copy does not contain which would be necessary for it to share my personal identity? If that’s what you mean, I’m not sure how confident I am of that, but I don’t think I care about X enough for it to affect my decisions either way. I wouldn’t pay you $10 to refrain from using your X-annihilator on me, either, if I were 1-epsilon confident that I would not change in any perceptible way after its use.
Well, it seems I’m utterly confused about subjective experience, even more so than I thought before. Thanks for calling my bs, again.
My first question is: how am I supposed to have arrived at that state? I can’t imagine it, personally. It seems utterly implausible… I can’t think of any amount of observation that would raise my confidence that high [...] Implausible situations ought to produce implausible behavior.
I can’t imagine it either. This could be an argument against thought experiments in general.
EDIT: Or are you asking how confident I am, given 1-epsilon confidence that TOOD is a perfect copy of me, that there isn’t some other imperceptible aspect of me, X, which this perfect copy does not contain which would be necessary for it to share my personal identity?
If I copied myself, I expect HR1 and HR2 would both think they’re the real HR1. HR1 wouldn’t have the subjective experience of HR2, and vice versa. Basically, they cease to be copies when they start receiving different sensory information. For HR1, the decision to terminate his own subjective experience seems like suicide, and for HR2, termination of subjective experience seems like being murdered. I can’t wrap my head around this stuff, and I can’t even reliably pinpoint where my source of confusion lies. Thinking about TOD and TOOD is much easier, since I haven’t experienced being either one, so they seem perfectly isomorphic to me.
It seems that if you make a perfect physical copy, whatever makes your subjective experience personal should be part of it, since it must be physics, but I can’t imagine what copying it would be like. Will there be some kind of unified consciousness of two subjective experiences at once?
I’m not sure English is sufficient to convey my meaning, if you have no idea of what I’m talking about. In that case it’s probably better not to make this mess even worse.