Upvote this if you would not press the button.
When I see people agreeing to 1000 years of agony, I wonder if they are allowing themselves to fully conceptualize that—if they have a real memory of prolonged agony that they can call up. I call up my memories of childbirth. I would do just about anything to avoid 1000 years of labor, and that’s not even “the most intense agony” possible.
People undergoing torture beg for death. Knowing that I would beg for death in that situation, I choose death ahead of time. If someone else made the decision for me to press the button, and I was the uploaded consciousness learning about what had happened, I would be horrified and devastated to think of myself undergoing that torture.
In fact I would die to prevent 1000 years of torture for anyone, not just myself.
Well, speaking only for myself, it’s clear that I’m not allowing myself to fully conceptualize the costs of a millennium of torture, even if I were able to, which I don’t think I actually am.
But it’s also clear that I’m not allowing myself, and am probably unable, to fully conceptualize the benefits of an immortal enjoyable life.
To put this more generally: my preference is to avoid cost/benefit tradeoffs where I can fully appreciate neither the costs nor the benefits. But, that said, I’m not sure that being able to appreciate the costs, but not the benefits, is an improvement.
Leaving all that aside, though… I suspect that, like you, I would choose to die (A) rather than choosing a long period of torture for someone else (B) if A and B were my choices. Of course, I don’t know for sure, and I hope never to find out.
But I also suspect that if I found myself in the position of already having chosen B, or of benefiting from that choice made on my behalf, I would avoid awareness of having made/benefited from that choice… that is, I suspect I am not one of those who actually walks away from Omelas.
I’m not proud of that, but there it is.
Extreme pain induces perhaps the strongest emotional bias imaginable; aside from simple sadism, that’s the main reason why torture has historically been practiced. Knowing this, I’d hesitate to give much weight to my assumed preferences under torture.
More generally, wishing for death under extreme emotional stress does not imply a volitional desire for death: that’s the assumption behind suicide hotlines, etc., and I’ve seen no particular evidence that it’s a mistaken one.
I would press the button if I estimated that the uploaded copy of me stood to gain more from the upload than the copy of me being uploaded would lose in the pain of the process. The answer to that question isn’t explicitly given as a premise, but with “effective immortality” to play with, it seems reasonable to assume that it’s positive.
You’re not asking yourself whether you’d like to die, all things being equal. You’re asking yourself whether you’d prefer death to the torture. And you would—the “you” that was undergoing the torture, would.
If you could gain immortality by subjecting someone else, a stranger from a different country, someone you’ve never met, to a thousand years of torture—would you do it?
I would not.
I would prefer immediate death to a thousand subjective years of agony followed by death, if that were all that was at stake. And I’m pretty sure that after, say, a subjective year of agony, I’d happily kill off an otherwise immortal copy of me to spare myself the remaining 999 years.
But I estimate that to be true because of the emotional stress associated with extreme pain, not because of any sober cost/benefit analysis. Absent that stress, I quickly conclude that any given moment of agony for Gest(a) is worth an arbitrary but much larger number of pain-free moments of continued existence for Gest(b); and given that I’d consider both branches “me” relative to my current self, I don’t find myself with any particular reason to privilege one over the other.
People undergoing torture will betray their strongest beliefs, even sell out their friends into precisely the same fate, to get the pain to stop; that’s well known. But I don’t think this reflects their true volition for any reasonable value of “true”: it’s only a product of extreme circumstances, one they’d certainly regret later, given the opportunity.
This introduces issues of coercion which don’t exist in the original problem, so I don’t believe it’s a fair analogy—but my answer is “only with their consent”. I would consider consenting to the reverse under some circumstances, and I’d be more likely to consent if the world was about to end as per the OP. I’d immediately regret it, of course, but for the reasons given above I don’t believe that should influence my decision.
I’m just not sure about the way you’re discounting the preferences of Nornagest(torture1000). In my imagination he’s there, screaming for the pain to stop, screaming that he takes it back, he takes it back, he takes it back. His preferences are so diametrically opposed to “yours” (the Nornagest making this decision now) that I almost question your right to make this decision for him.
Well, I actually do believe that Gest(torture1000)’s preferences are consistent with my current ones, absent the stress and lingering aftereffects of the uploading process. That is, if Omega were to pause halfway through that subjective thousand years and offer him a cup of tea and n subjective years of therapy for the inevitable post-traumatic problems, at the end of it I think he’d agree that Gest(now) made the right choice.
I don’t think that predictably biased future preferences ought to be taken into consideration without adjusting for the bias. Let’s say I’m about to go to a party some distance away. I predict that I’ll want to drive home drunk after it; I’m also aware both that that’s a bad idea and that I won’t think it’s a bad idea six hours from now. Giving my car keys to the host predictably violates my future preferences, but I’m willing to overlook this to eliminate the possibility of wrapping my car around a fire hydrant.
That is, if Omega were to pause halfway through that subjective thousand years and offer him a cup of tea and n subjective years of therapy for the inevitable post-traumatic problems, at the end of it I think he’d agree that Gest(now) made the right choice.
If I accept that’s true, my moral objection goes away.
Hm.
I can imagine myself agreeing to be tortured in exchange for someone I love being allowed to go free. I expect that, if that offer were accepted, shortly thereafter I would agree to let my loved one be tortured in my stead if that would only make the pain stop. I expect that, if that request were granted, I would regret that choice and might in fact even agree to be tortured again.
It would not surprise me to discover that I could toggle between those states several times until I eventually had a nervous breakdown.
It’s really unclear to me how I’m supposed to account for these future selves’ expressed preferences, in that case.
In the case that the tortured-you would make the same decision all over again, my intuition (I think) agrees with yours. My objection is basically to splitting off “selves” and subjecting them to things that the post-split self would never consent to.
(nods) That’s reasonable.
OTOH, I do think I can consent now to consequences that my future self will have to suffer, even if my future self will at that point—when the benefits are past, and the costs are current—withdraw that consent.
My difficulty here is that the difference between making a choice for myself and making it for someone else actually does seem to matter to me, so reasoning from analogy to the “torture someone else” scenario isn’t obviously legitimate.
That is: let’s assume that given that choice, I would forego immortality. (Truthfully, I don’t know what I would do in that situation, and I doubt anyone else does either. I suspect it depends enormously on how the choice is framed.) It doesn’t necessarily follow that I would forego immortality in exchange for subjecting myself to it.
This is similar to the sense in which I might be willing to die to save a loved one’s life, but it doesn’t follow that I’d be willing to kill for it. It seems to matter whether or not the person I’m assigning a negative consequence to is me.
It doesn’t necessarily follow that I would forego immortality in exchange for subjecting myself to it.
But then you’re talking about putting a future-you into a situation where you know that experiences will dramatically reshape that future-you’s priorities and values, to the point where TheOtherDave(torture1000)’s decisions and preferences would diverge markedly from your current ones. I think making this decision for TheOtherDave(torture1000) is a lot like making it for someone else, given that you know TheOtherDave(torture1000) is going to object violently to this decision.