You seem to be objecting that this is an unfair thought experiment because humans were not designed to contemplate these extreme cases.
But that’s precisely the point! These extreme cases might not have been present in our ancestral environment. They might not be present now. But there is a decent chance that they are coming...that, someday, we will literally be offered this choice, or something analogous to it, by a superintelligent AI who, even if friendly, honestly just wants to ascertain our preferences. Perhaps the superintelligent AI can create a utopia for us, but during the week in which it is being constructed by nano-robots, the Earth’s surface will be scoured to bits and resemble a living hell. Would we still want it?
That’s why this post poses a good, relevant question. And I see that most people seem to just want to squirm in their seats and complain about the tough question rather than answer it.
Me, I would take option 2, assuming that the billion dollars I would get afterwards would enable more than a week of bliss whose positive magnitude is equal to or greater than the magnitude of the suffering I would experience during that horrible week.
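To pin that assumption down a bit (the notation here is mine and purely illustrative), I would take option 2 exactly when

$$
U_{\text{bliss}}(\$1\text{B}) \;\ge\; \bigl|\,U_{\text{torture}}\bigr|,
\qquad \text{i.e.} \qquad
U_{\text{bliss}} + U_{\text{torture}} \;\ge\; 0,
$$

where $U_{\text{torture}} < 0$ is the utility of the horrible week and $U_{\text{bliss}} > 0$ is whatever bliss the billion dollars buys afterwards.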
Plus, no matter how bad that first week of torture is, I will know in the back of my head the whole time that I can look forward to a billion dollars at the end of it. Now, if part of the torture involves temporarily deleting my memory of having made the deal and leaving me confused about why I am being tortured and how long it will last (possibly forever), that would make me think a bit harder about the deal, but I would still take option 2.
Perhaps the superintelligent AI can create a utopia for us, but during the week in which it is being constructed by nano-robots, the Earth’s surface will be scoured to bits and resemble a living hell.
That’s a very different scenario than the one proposed. In that scenario the AI can simply put all humans in a coma for the week.
Plus, no matter how bad that first week of torture is, I will know in the back of my head the whole time that I can look forward to a billion dollars at the end of it.
I don’t think you understand the concept of maximum negative utility. There are certainly ways to make you believe that you will suffer in that state forever.
Whether or not the you that gets tortured actually gets the billion dollars also depends a lot on your definition of personal identity.
The only way of getting past the inherent issues of the question would be to say that the “you” who gets the billion dollars is an exact copy of the you before the week of torture, and not an extension of the person who lives through the week of torture.
The only way of getting past the inherent issues of the question would be to say that the “you” who gets the billion dollars is an exact copy of the you before the week of torture, and not an extension of the person who lives through the week of torture.
Exactly. Whether I’m prepared to sacrifice the copy of me that suffers through the week of hell depends on how much empathy I have for the other me, or on how my utility calculation comes out. Like in Branches of the Tree of Time, where the protagonists sacrifice their clones from different timelines to rescue the important timeline.
I think if we rephrase the scenario to be slightly more plausible and familiar, it will become clearer to people:
Imagine that some eccentric millionaire approaches you with the following deal: she will give you a million dollars if:
You agree to go to a dentist and undergo a root canal operation without anesthesia (never mind the fact that you probably don’t need a root canal), BUT:
You DO get to have a heaping dose of Versed, which, while it won’t dull the pain during the operation, will prevent you from remembering anything about it after the fact.
Would you take the million dollars and do the operation? I would!
Now, as to the question of whether the person undergoing the root canal operation is the real me, I would say, YES! I will experience it. Now, is the copy of the pre-operation me that gets restored after the operation also me? I say YES! I will also experience that body.
Ultimately, the deciding factor for me is that the root canal will mean only an hour or two of extreme pain, while the million dollars will bring me enjoyment for far longer. The fact that I won’t remember the root canal operation does nothing to change my estimate of how bad the operation will be; it only changes my estimate of how pleasant the post-root-canal experience will be (because I will know that I won’t be haunted by nightmares of root canal pain while I am enjoying my million dollars).
Even though the memory of the root canal operation will cease to exist for me at some point, the experience still factors into my overall calculations of utility. It’s just that normal events that we remember factor into our calculations of utility in two discrete terms: how nice/bad they are in the moment + how nice/bad their memory after-effects are. In the case of amnesia, you are just lopping off the right hand side of that sum.
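Here is a toy sketch of that bookkeeping, with a made-up function and made-up numbers purely for illustration (none of these figures come from the actual deal):

```python
# Toy utility bookkeeping: an event contributes utility in two discrete terms,
# the in-the-moment experience plus the memory after-effects.
def event_utility(in_the_moment, memory_after_effects, amnesia=False):
    """Total utility of an event; amnesia lops off the memory term."""
    if amnesia:
        memory_after_effects = 0.0
    return in_the_moment + memory_after_effects

# Root canal without anesthesia, but with Versed-style amnesia afterwards.
root_canal = event_utility(in_the_moment=-500.0,
                           memory_after_effects=-200.0,  # nightmares I would otherwise have
                           amnesia=True)                 # ...which the amnesia zeroes out

# A million dollars, enjoyed and remembered over the following years.
million_dollars = event_utility(in_the_moment=2000.0, memory_after_effects=500.0)

take_the_deal = (root_canal + million_dollars) > 0
print(root_canal, million_dollars, take_the_deal)  # -500.0 2500.0 True
```

The amnesia only changes which terms survive into the sum; the in-the-moment pain is counted either way, which is exactly why it still factors into the decision.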
Would you take the million dollars and do the operation?
Sure. But that is strictly a different, and much less extreme, thought experiment than the former.
I wouldn’t treat the narcotized me as a different person. There is a lot of continuity in this scenario. And a lot of real-life risks and consequences too.
Okay...but why wouldn’t you treat the narcotized you as a different person, when you would treat the memory-erased you as a different person in the other scenario? Isn’t that inconsistent? The same thing is being done in both scenarios, your memory is being erased, just by different means.
I am not just my memory. My identity and being are closely entangled with my life, my environment, the people I interact with, and the physical effects on me and my body. Basically, everything that has an effect on my future self (weighted by causal distance, or something like that).
And the scenarios differ in exactly that respect: the branch Omega tortures has practically no causal effect on my future self after the week (except my absence for a week, which is a comparatively small effect, given that many people leave home for much longer periods without much consequence).
The millionaire, on the other hand, may plausibly have a much stronger effect. At least that is how I model it. Part of it surely is that the Omega example is constructed to have no other effects, so I assume all other life-entanglement effects to be small. Whereas for the millionaire example, all my caveats regarding humans proposing deals apply.
You seem to be objecting that this is an unfair thought experiment because humans were not designed to contemplate these extreme cases.
You got me wrong. I’m not objecting. The thought experiment is a valid and interesting one. It’s just that the answers it elicits fall into a certain class of problems, which I pointed out.