What would your copy want?
What if it was a near-copy without $fatalMedicalCondition?
My first thought (in response to the second question) is ‘immediately terminate myself, leaving the copy as the only valid continuation of my identity’.
Of course, it is questionable whether I would have the willpower to go through with it. I believe that my copy’s mind would constitute just as ‘real’ a continuation of my consciousness as my own mind would after a procedure that removed the memories of the past few days (or however long since the split) whilst leaving all else intact; such a procedure is, of course, just a contrived-for-the-sake-of-the-thought-experiment variety of the sort of forgetting we undergo all the time. But I have trouble alieving it.
This would be a much more interesting response if you would also agree with Lalartu in the more general case.
Kind of. I wouldn’t defect against my copy without his consent, but I would want the pool trimmed down to only a single version of myself (ideally whichever one had the highest expected future utility, all else equal). The copy, being a copy, should want the same thing. The only time I wouldn’t be opposed to the existence of multiple instances of myself would be if those instances could regularly synchronize their memories and experiences, and thus constitute a single distributed entity with mere synchronization delays rather than multiple diverging entities.
Why would you want to actively avoid having a copy?
Because I terminally value the uniqueness of my identity.
Really? Can you say a little more about why you think you have that value? I guess I’m not convinced that it’s really a terminal value if it varies so widely across people of otherwise similar beliefs. Presumably that’s what Lalartu meant as well, but I just don’t get it. I like myself, so I’d like more of myself in the world!
I think a big part of it is that I don’t really care about other people except instrumentally. I care terminally about myself, but only because I experience my own thoughts and feelings first-hand. If I knew I were going to be branched, then I’d care about both copies in advance, as both are valid continuations of my current sensory stream. However, once the branch had taken place, each copy would immediately stop caring about the other (although I expect they would still practice altruistic behavior towards each other for decision-theoretic reasons). I suspect this has also influenced my sense of morality: I’ve never been attracted to total utilitarianism, as I’ve never been able to see why the existence of X people should be considered superior to the existence of Y < X equally satisfied people.
So yeah, that’s part of it, but not all of it (if that were the extent of it, I’d be indifferent to the existence of copies, not opposed to them). The rest is hard to put into words, and I suspect that even were I to succeed in doing so, I’d only have succeeded in manufacturing a verbal rationalization. Part of it is instrumental: each copy would be a potential competitor. But that’s insufficient to explain my feelings on the matter: the competition worry wouldn’t apply to, say, the Many-Worlds Interpretation of quantum mechanics, and yet I’m still bothered by that interpretation because it implies constant branching of my identity. So in the end, I think that I can’t offer a verbal justification for this preference precisely because it’s a terminal preference.