I’ve been trying to wrap my head around the SPECKS vs TORTURE argument, and I still haven’t been able to convince myself that TORTURE is the right answer.
One idea that I had would be to apply the whole thing to myself. Suppose Omega comes to me and offers me two choices:
I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to be tortured continuously for fifty years first, but with no lasting harm.
I can have a satisfying and fulfilling life for 3^^^3 days, but I’ll wake up with a speck in my eye everyday.
I have to say that I would still pick choice 2 for myself. I know that if I add up the utilities in any standard way, option 2 comes out far lower, but I still can’t get myself to choose 1. Even if you move the torture so that it happens at a random time or at the end (to get rid of near-mode thinking), I still intuitively prefer 2 quite strongly.
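For concreteness, here is a back-of-the-envelope version of that “add up the utilities” step, sketched in Python. The torture-to-speck ratio of 10^100 is invented purely for illustration; the point is that no finite ratio survives a factor of 3^^^3, which is a power tower of 3s of height 3^^3 = 3^27 = 7,625,597,484,987 and can only be handled via a crude lower bound:

```python
from math import log10

# Made-up assumption: 50 years of torture is as bad as 10**100 dust specks
# (an absurdly generous overestimate of the ratio).
log10_torture_in_specks = 100

# 3^^^3 is a tower of 3s of height 3**27 = 7,625,597,484,987.
# Even the FOURTH level of that tower already dwarfs the ratio above.
tower = 3            # level 1: 3
tower = 3 ** tower   # level 2: 27
tower = 3 ** tower   # level 3: 7,625,597,484,987
log10_level4 = tower * log10(3)   # log10 of level 4 = log10(3**7,625,597,484,987)

print(f"log10(level 4 of the tower) ~ {log10_level4:.3e}")   # ~3.638e12
print(log10_level4 > log10_torture_in_specks)                # True, enormously
# So 3^^^3 specks outweigh the torture for ANY finite torture/speck ratio;
# the sum is not remotely close.
```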
Even though I can’t formalize why I think option 2 is better, feeling that it is the right choice for myself makes me a bit more confident that SPECKS would be the right choice as well. Also, this thought experiment makes me think the intuitive choice of SPECKS is less about fairness than I thought.
If anyone has any more insight about this, that would be helpful.
No novel insights; you’ve precisely put your finger on why this example is interesting: it pits our intuitions against the conclusions of a certain flavor of utilitarianism. If we embrace that flavor of utilitarianism, we must acknowledge that our intuitions are unreliable. To accept our intuitions as definitive, we must reject that flavor of utilitarianism. If we wish to keep both, we must find a radically different way of framing the scenario.
The interesting stuff is in what comes next. If I reject that flavor of utilitarianism, what do I use instead, and how does that affect my beliefs about right action? If I reject my intuitions as a reliable source of information about good and bad outcomes, what do I use instead, and how does that affect my beliefs about right action? If I try to synthesize the apparent contradiction, how might I do that, and where does that leave me?
It’s not obvious that the ‘utilities’ for different people should add as sublinearly as those for one person do. So a better comparison would be whether you prefer to receive a dust speck in the eye with probability p or 50 years of torture with probability p/3^^^3. (This is essentially the veil-of-ignorance thing, where p is 3^^^3 divided by the total population.)
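A quick expected-disutility version of that reframing, with invented placeholder numbers (the probability p, both disutilities, and the 10^100 ratio are all made up), working in log10 space since 3^^^3 itself is not representable:

```python
from math import log10

log10_p         = -6     # some probability p, say 1e-6; it cancels out anyway
log10_d_speck   = 0      # a speck = 1 disutility unit
log10_d_torture = 100    # torture = 10**100 units, deliberately overestimated

# Lower bound on log10(3^^^3): level 4 of the 7.6-trillion-level tower.
log10_N = (3 ** 3 ** 3) * log10(3)        # ~3.6e12, a vast *under*estimate

log10_E_speck   = log10_p + log10_d_speck               # E = p * d_speck
log10_E_torture = log10_p + log10_d_torture - log10_N   # E = (p/3^^^3) * d_torture

print(log10_E_speck > log10_E_torture)   # True: the speck lottery carries
                                         # far more expected disutility
```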
Wow, it sounds terribly like Pascal’s Mugging now. Has anyone noticed that before?
How do you feel about this framing? Would you rather have a 1 in 3^^^3 chance of being tortured for 50 years, or get a dust speck in your eye? (This is analogous to jaywalking vs. waiting a minute for a crosswalk.)
Thinking about it as if it’s a meaningful choice may soften you up for Pascal’s scams.
The way you phrase it, it makes me think this caveat is really the key point. Consider if Omega doesn’t offer that and says this:
I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to be tortured continuously for fifty years first.
I can have a satisfying and fulfilling life for 3^^^3 days, but I’ll wake up with a speck in my eye everyday.
My intuitive response would be “Don’t pick 50 years of torture, you’ll die!”, which is generally true. It’s explicitly not true in the first scenario, because of the “but with no lasting harm” caveat. But without that caveat, I doubt I would survive 50 years of torture, which means that whatever happens afterwards is irrelevant, since I’d be dead.
For instance, imagine if the torture disutility were something like bleeding.
I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to lose 50 gallons of blood all at once first.
I can have a satisfying and fulfilling life for 3^^^3 days, but one blood cell will be removed from my body every day.
Or alternatively, starving.
I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to go without food for 50 years first.
I can have a satisfying and fulfilling life for 3^^^3 days, but one crumb will be removed from my plate every day.
In both cases, my intuitive response successfully steers me away from death!
But with the caveat in, your intuitive response consigns you to a greater total inconvenience, because it doesn’t quite register the caveat, or doesn’t trust the person giving it.
Now, Omega is defined as generating circumstances which are 100% trustworthy. So to properly grasp the question on an intuitive level means you have to intuitively grasp caveats such as “I am certain that I know that I am talking to Omega, Omega is certainly correct at all times, and Omega said I certainly won’t suffer any lasting harm, and I certainly understood Omega correctly when he said that.” Because that’s all stipulated as the fine print caveats in an Omega problem, in general.
If you think to yourself “Well, I’m NOT certain of any of those, I’m just really really sure of them!” and then rerun the numbers (a rough version is sketched after this comment), then I think the intuitive response goes back to being correct. I mean, consider the following question where you aren’t certain about that caveat, just really sure:
I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to be tortured continuously for fifty years first, but with a 99.99% chance of no lasting harm and a 0.01% chance of death.
I can have a satisfying and fulfilling life for 3^^^3 days, but I’ll wake up with a speck in my eye everyday.
Does this seem insightful, or am I missing something?
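One rough way to “rerun the numbers” with that 0.01% death risk is to compare the two options per day of the 3^^^3-day life, so that 3^^^3 multiplies both sides equally and cancels out. All values below are invented for illustration:

```python
p_death = 1e-4     # the 0.01% chance of death
v_day   = 1.0      # made-up value of one satisfying, fulfilling day
d_speck = 1e-6     # made-up disutility of one dust speck, in the same units

# Option 1 loses p_death * v_day of expected life value per day at stake.
# (The 50 years of torture are finite, so spread over 3^^^3 days they
# contribute essentially zero per day.)
loss_per_day_option1 = p_death * v_day   # 1e-4
# Option 2 loses exactly one speck per day.
loss_per_day_option2 = d_speck           # 1e-6

print(loss_per_day_option1 > loss_per_day_option2)
# True here: the death risk costs more than the specks whenever a good day
# is worth more than 1/p_death = 10,000 specks, so even a small uncertainty
# about the caveat flips the calculation back toward SPECKS.
```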
Trouble is, I am running on corrupted hardware, and am not capable of being in the epistemic state that the problem asks me to occupy. Pretending that I am capable of such epistemic states, when I am not, seems like a pretty bad idea.
Being dead does not seem to fit the description “have a satisfying and fulfilling life for 3^^^3 days”, with or without the caveat. Instead, you should be concerned that the “lasting harm” changes you in such a way that what remains is still ‘satisfied and fulfilled’, but in a way that you, as of now, would not consider desirable, or such that the person remaining after the torture is sufficiently not-you-anymore.
Does “no lasting harm” make sense if we’re talking about human beings?
Does it matter how long recovery takes? (Probably not, so long as it’s not an amount of time which requires special notation.)
Can being removed from your usual life for fifty years count as lasting harm?
Do you feel like baseline happiness makes a difference? If not, imagine starting with 3^^^3+1 people, each being tortured for 50 years. You can get one of them out of torture, at the expense of a pain increase equivalent to a speck of dust in the eye for each of the others. If you do this for everyone, each person ends up with pain equivalent to 3^^^3 dust specks, an amount far surpassing 50 years of torture.
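Here is a toy version of that bookkeeping in Python, with small stand-in numbers (N stands in for 3^^^3, and T for the speck-equivalent of 50 years of torture; both are made up, since the argument only needs N to be enormous):

```python
N = 5          # stand-in for 3^^^3
T = 100        # stand-in for 50 years of torture, measured in specks

pain = [T] * (N + 1)            # N+1 people, all being tortured

for rescued in range(N + 1):    # release each person in turn...
    pain[rescued] -= T          # ...ending their torture,
    for other in range(N + 1):
        if other != rescued:
            pain[other] += 1    # ...at the price of one speck for everyone else

print(pain)   # [5, 5, 5, 5, 5, 5]: everyone ends with N specks of pain
# With N = 3^^^3, each person would end with 3^^^3 specks, which on a
# linear within-person scale far exceeds the torture they started with.
```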
I prefer oblivion to significant amounts of negative utility for any sustained period.
Except for possible disutility to family and friends, oblivion has a lot to recommend it; not least that you won’t be around to regret it afterward. It isn’t something to seek, since you won’t have any positive utility afterward, but it isn’t something that is worth enduring much suffering to avoid either.
I judge that a disadvantage.
If you read the second sentence, you’ll see that I do too; it’s just a very weak disadvantage compared to almost any suffering. If I didn’t consider it at least somewhat disadvantageous, I wouldn’t be around now to write about it.
That seems to imply that you would rather commit suicide than, say, endure a toothache for a few days. Really?
Putting the torture first changes things: we discount future events, so of course the torture scenario seems worse.
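A minimal sketch of that discounting effect, assuming simple exponential discounting; the per-day factor and the disutility figure are both invented for illustration:

```python
delta     = 0.999       # hypothetical per-day discount factor
D_torture = 1e9         # undiscounted disutility of 50 years of torture

torture_now   = D_torture * delta ** 0         # torture starts today
torture_later = D_torture * delta ** 100_000   # same torture, ~274 years out

print(torture_now / torture_later)   # ~2.8e43
# The identical harm is discounted by a factor of roughly 10**43 once it is
# pushed far enough into the future, so "torture first" feels much worse
# than "torture at the end" even before any specks enter the picture.
```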