No, I wouldn’t. I’d choose the second option so as to prevent my torture from being compounded with my total death.
Er, the second option is the one where I kill the clone. In both options, only you remain alive after 10 hours, no clone.
How about this cleaner version: I create a clone of you (no choice here). Then I torture you for an hour, OR the clone for ten hours, after which you’re both free to go.
I’d choose one hour I think.
How about this: choose between 59 minutes of torture for you plus 10 hours for the clone, versus 1 hour each for you and the clone, with your experience and the clone’s being indistinguishable for the first 59 minutes.
If you choose the 59min/10hrs, what’s going through your mind in minute 59? Is it “this is all about to stop, but some other poor bastard is going to have a rough 9 hours”? Or is it “ohgodohgodohgod I hope I’m not the clone”?
I’d choose one hour also.
In your new formulation, I’d choose the 1 hour for both of us, and at the 59th minute both copies of me would expect it to be over soon. My copies and I would agree in our expectations of each other’s behavior.
I identify closely with anything sufficiently similar to me, including close past and future versions of me. For instance, if there were a copy of me made an hour ago (whether or not the copy had runtime during that hour), and he or I were given the choice during your test, we would choose the same thing, as mentioned above.
It’s true that at minute 59 neither the original nor the clone knows which of them is the clone. But the original, who actually made the decision before the clone was created, correctly optimized his own future experience: 59 minutes of torture instead of an hour. The original doesn’t care about the clone’s experiences. (That is, I wouldn’t care.)
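To put rough numbers on the two framings (a back-of-the-envelope sketch, assuming each copy assigns itself even odds of being the clone at minute 59, and counting torture in minutes):

\[
E[\text{torture} \mid \text{59 min / 10 hr}] = \tfrac{1}{2}(59) + \tfrac{1}{2}(600) = 329.5,
\qquad
E[\text{torture} \mid \text{1 hr each}] = \tfrac{1}{2}(60) + \tfrac{1}{2}(60) = 60,
\]

whereas if only the original’s own pre-duplication stream of experience counts, the comparison is simply 59 minutes against 60.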
Sounds consistent. Forgive me if I probe a bit further: I’m not trying to be rude; I’m interested in the boundaries of your theory.
In an unconsciousness—clone—destroy original brain—wake scenario, do you anticipate surviving?
In an unconsciousness—clone twice—destroy original brain—wake scenario, do you identify with / anticipate being zero, one, or two of the clones?
I’ve changed my mind since then, so I would make different decisions now… more in line with what others here have been proposing.
I would try to optimize for all projected future clones. But in a scenario where I know some clones are going to die no matter what they do (your previous question), I would partially discount the experiences such clones have before they die and try to optimize more for the experiences of the survivor. That’s just my personal preference: the lifelong memory of the survivor matters more than the precise terminal existence of the killed clone.
Regarding your new questions about anticipation: under the new theory, which has no concept of personal continuity, there doesn’t seem to be any such thing as personal survival where duplication and termination are involved.
...I see. That doesn’t change my answer, as it happens; my clone dies, yes, and you bear moral culpability for it, but it is better for one to make it out (relatively) unscathed than for the only survivor to be traumatized. In the new version, I would prefer there be only one hour of torture between us, and accept the first option.
See, the thing is: in my utility function, I don’t have a special ranking for “my” experiences over everyone else’s. When I do the math, I come out paying a lot more attention to my own situation for purely pragmatic reasons.
Even given your theory and your utility function, I don’t see how your clone’s 10 hours of torture and subsequent death would leave you traumatized. Isn’t it best for the survivor to have experienced no torture at all (so we’d torture the clone)?
Also, regarding Scenario B: imagine that I decided to get in on it, only with a variation. Instead of duplicating you and offering those two options (either you, the original, are tortured for an hour, or your duplicate is tortured for ten), I created an entirely new individual, no more similar to you than any other human being, and gave you the choice between being tortured yourself for an hour and the new guy being tortured for ten.
Which would you choose? Me, I think it’s perfectly obvious that it is better for less torture to occur.
Better for whom? To me it’s perfectly obvious that it’s better for me to have someone else tortured, and I would choose that. I would only choose to be hurt myself if the tradeoff was very unequal (a speck in the eye for me, ten hours of torture for him), and even then I would soon stop agreeing to be hurt if I had to face such a choice repeatedly.
If someone were to mount a campaign to stop your entire torturing project, and if I could participate by being hurt (but not by endangering my life), then I would agree to pay a much higher price. But that’s because such participation is a form of social capital and also helps enforce social norms elsewhere (this is both an evolutionary and a personal reason).
I’m sorry—in Scenario A (I suffer torture, or duplicate suffers more torture and is killed), I would choose the second option for essentially the reasons you propose. In Scenario B (I’m duplicated, then either I suffer torture or duplicate suffers more torture, then we both live), I would choose the first option because that’s less torture. I don’t see the complexity.
To make sure I was understood correctly: after the clone is killed (which happens in both scenarios), the only identifiable “you” is the survivor. Therefore any considerations of lasting trauma should apply to the survivor, or not at all. So to minimize trauma (without minimizing the amount of torture), we should ensure that the survivor, who is known even before the clone is killed, is the one who is not tortured.
I was under the impression that the clone survived in the second scenario, and that this was the difference between the two scenarios. If so, that might explain some of the confusion about my answers.
No, the scenarios I originally proposed were:
1. A clone is made.
2a. If you choose, you’re tortured for an hour.
2b. Or if you so choose, the clone is tortured for ten hours.
3. Ten hours from now, the clone is destroyed in any case.