I think that the Torture versus Dust Specks “paradox” was invented to show how utilitarianism (or whatever we’re calling it) can lead to conclusions that are preposterous on their face whenever the utility numbers get big enough. And I think that the intent was for everybody to accept this, and shut up and calculate.
However, for me, and I suspect some others, Torture versus Dust Specks and also Pascal’s Mugging have implied something rather different: that utilitarianism (or whatever we’re calling it) doesn’t work correctly when the numbers get too big.
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions that should help guide our decisions.
Let’s consider a Modified Torture versus Specks scenario: You are given the same choice as in the canonical problem, except you are also given the opportunity to collect polling data from every single one of the 3^^^3 individuals before you make your decision. You formulate the following queries:
“Would you rather experience the mild distraction of a dust speck in your eye, or allow someone else to be tortured for fifty years?”
“Would you rather be tortured for fifty years, or have someone else experience the mild discomfort of a dust speck in their eye?”
You do not mention, in either query, that you yourself are facing the Torture versus Specks dilemma. You are only allowing the 3^^^3 to consider themselves and one hypothetical other.
You get the polling results back instantly. (Let’s make things simple and assume we live in a universe without clinical psychopathy.) The vast majority of respondents have chosen the “obviously correct” option.
Now you have to make your decision knowing that the entire universe totally wouldn’t mind having dust specks in exchange for preventing suffering for one other person. If that doesn’t change your decision … something is wrong. I’m not saying something is wrong with the decision so much as something is wrong with your decision theory.
I don’t think this works. Change it to:
“Would you rather be tortured for a week, or have someone else be tortured for 100 years?”
“Would you rather be tortured for 100 years, or have someone else be tortured for a week?”
The popular opinion would most likely be one week in both cases, which by this logic would lead to 3^^^3 people being tortured for a week. Utilitarianism definitely does not lead to this conclusion, so the query is not equivalent to the original question.
But they’re not taking a dust-speck to prevent torture—they’re taking a dust-speck to prevent torture and cause the dust-speck holocaust. If you drop relevant information, of course you get different answers; I see no reason your representation here is more essentially accurate, and some reason it might be less.
Sure, and that is intentional. You wouldn’t bother polling the universe to determine their answer to the same paradox you’re solving.
You can look at it this way. Each person who responds to your poll is basically telling you: “Do not factor me, personally, into your utility calculation.” It is equivalent to opting out of the equation. “Don’t you dare torture someone on my behalf!” The “dust-speck holocaust” then disappears!
Imagine this: You send everyone a little message that says, “Warning! You are about to get an annoying dust speck in your eye. But, partially due to this sacrifice of your comfort, someone else will be spared horrible torture.” Would/should they care that the degree to which they contributed to saving someone from torture is infinitesimal?
Let’s go on to pretend that we asked Omega to calculate exactly how many humans with dust specks are equivalent to one person being tortured for fifty years. Let’s pretend this number comes out to 1x10^14 people. It turns out this is much smaller than 3^^^3. Omega gives us all this information and then tells us he’s only going to give dust specks to 1x10^14 minus one people. We breathe a huge sigh of relief—you don’t have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he’s changing the deal—he’s going to be putting a dust speck in YOUR eye, as well.
Deciding, at this point, that you now have to torture somebody is equivalent to denying that you have the choice to say, “I can ignore this dust speck if the alternative is torturing somebody.” Bear in mind that you would be choosing to put dust specks in exactly the same number of eyes as before, plus only your own.
My example above is merely extrapolating this case to the case where each individual can decide to opt out.
But that’s not what a vote that way means; consider polling 100 individuals who are so noble as to pick 9 hours of torture over someone else getting 10. How many of them would pick torturing 99 other conditionally willing people over torturing one unwilling person? It is simply not the same question.
The correct response when Omega changes the deal is “Oh, come on! You’re making me decide between two situations that are literally within a dust speck’s worth of each other. Why bother me with such trivial questions?” Because that’s what it is. You’re not choosing between “dust speck in my eye” and “terrible thing happens”. You’re choosing between “terrible thing happens” and “infinitesimally less terrible thing happens, plus I have a dust speck in my eye.”
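To pin down the arithmetic in this exchange, here is a minimal sketch in Python. It takes Omega’s hypothetical equivalence point of 1x10^14 specks per fifty years of torture at face value and treats disutility as purely additive, which is exactly the assumption under dispute; all the numbers are placeholders.

```python
# Minimal sketch of the modified Omega scenario above, under the (disputed)
# assumption that disutility adds linearly across people. All values are
# hypothetical placeholders rather than measurements of anything.

SPECK = 1.0                        # disutility of one dust speck, arbitrary units
TORTURE = 1e14 * SPECK             # Omega's stipulated equivalence point
n_specked_before = int(1e14) - 1   # people Omega planned to speck before changing the deal

disutility_original_deal = n_specked_before * SPECK          # the deal you sighed in relief at
disutility_specks_plus_you = (n_specked_before + 1) * SPECK  # same crowd plus your own eye
disutility_torture = TORTURE                                 # torturing one person instead

# Choosing torture after the deal changes means preferring an option that is
# exactly tied with "everyone plus you gets a speck", and only one speck's
# worth of disutility away from the deal you had already accepted.
print(disutility_specks_plus_you - disutility_original_deal)  # 1.0 (one speck)
print(disutility_torture - disutility_specks_plus_you)        # 0.0 (dead even, by construction)
```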
The first paragraph of this comment is a nitpick, but I felt compelled to make it: there is no way that 10^14 dust specks is anywhere near enough to equal one torture victim. Maybe if you multiplied it by a googolplex, then by the number of atoms in the universe, you’d be within a few orders of magnitude.
And now for the meaty response.
You’re making the whole case extremely arbitrary and ignoring utility metrics, which I will now attempt to demonstrate.
Eliezer chose the number 3^^^3 so that no calculation of the disutility of the torture could ever match it, even if you have deontological qualms about torture (which most humans do). It simply doesn’t compare. Utilitarianism in the real world doesn’t work on fringe cases because utility can’t actually be measured. But if you could measure it, then you’d always pick the slightly higher value, every single time. In your example,
We breathe a huge sigh of relief—you don’t have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he’s changing the deal—he’s going to be putting a dust speck in YOUR eye, as well.
you ignore that part of my utility function that includes selflessness. Sacrificing something that means little to me to spare someone else intense suffering yields positive utility for me, and I assume for other people as well. (This, interestingly, also invalidates the example you gave earlier where you polled the 3^^^3 people asking what they wanted—you ignored altruism in the calculation.)
Your problems with the Torture vs. Dust Specks dilemma all boil down to “Here’s how the decision changes if I change the parameters of the problem!” (and that doesn’t even work in most of your examples).
Here’s the real problem underlying the equation, one that is invulnerable to nitpicks:
Omega comes to you and says, “I will create 3^^^3 units of disutility, or disutility equal to or less than the destruction of a single galaxy full of sentient life. Which do you choose?”
As has been said before, I think the answer is obvious.
I’m not entirely convinced by the rest of your argument, but
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions that should help guide our decisions.
Is, far and away, the most intelligent thing I have ever seen anyone write on this damn paradox.
Come on, people. The fact that naive preference utilitarianism gives us torture rather than dust specks is not some result we have to live with; it’s an indication that the decision theory is horribly, horribly wrong.
It is beyond me how people can look at dust specks and torture and draw the conclusion they do. In my mind, the most obvious, immediate objection is that utility does not aggregate additively across people in any reasonable ethical system. This is true no matter how big the numbers are. Instead it aggregates by minimum, or maybe multiplicatively (especially if we normalize everyone’s utility function to [0,1]).
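As a toy illustration of the contrast between these aggregation rules (a minimal sketch with made-up placeholder utilities, not a worked-out moral theory), consider:

```python
# Toy comparison of the aggregation rules mentioned above. Per-person utilities
# are normalized to [0, 1]: 1.0 means unaffected, lower means worse off.
# The specific values and the population size are made-up placeholders.

SPECKED = 0.999999    # hypothetical utility of someone with a dust speck in the eye
TORTURED = 0.000001   # hypothetical utility of the person tortured for fifty years
N = 10 ** 20          # stand-in for a very large population (3^^^3 will not fit anywhere)

def aggregate_additive(groups):
    """Sum of everyone's utility: the 'multiply by the number of sufferers' rule."""
    return sum(utility * count for utility, count in groups)

def aggregate_minimum(groups):
    """Utility of the worst-off person, regardless of how many share each fate."""
    return min(utility for utility, _count in groups)

specks_world = [(SPECKED, N)]                    # everyone gets a speck
torture_world = [(TORTURED, 1), (1.0, N - 1)]    # one person tortured, the rest untouched

# The additive rule ranks the torture world higher once N is large enough;
# the minimum rule ranks the specks world higher no matter how large N gets.
print(aggregate_additive(specks_world) > aggregate_additive(torture_world))   # False
print(aggregate_minimum(specks_world) > aggregate_minimum(torture_world))     # True
```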
Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It’s the epitome of Spock “rationality”.
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions that should help guide our decisions.
I would say, instead, that it gives a valid total-suffering value but that said value is not necessarily what is important. It is not how I extrapolate my intuitive aversion to suffering, for example.
Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It’s the epitome of Spock “rationality”.
I would say the same but substitute ‘torture’ for ‘reprehensible’. Using math in that way is essentially begging the question—the important decision is in which math to choose as a guess at our utility function, after all. But at the same time I don’t consider choosing torture to be reprehensible. Because the fact that there are 3^^^3 dust specks really does matter.
“Would you rather experience the mild distraction of a dust speck in your eye, or allow someone else to be tortured for fifty years?”
“Would you rather be tortured for fifty years, or have someone else experience the mild discomfort of a dust speck in their eye?”
Asking this question of (let’s say) humans will cause them to believe that only one person is getting a dust speck in the eye. Of course they’re going to come up with the wrong answer if they have incomplete information.
Now you have to make your decision knowing that the entire universe totally wouldn’t mind having dust specks in exchange for preventing suffering for one other person. If that doesn’t change your decision … something is wrong.
There are two problems with this. The first is that if you take a number of people as big as 3^^^3 and ask them all this question, an incomprehensibly huge number will prefer to torture the other guy. These people will be insane, demented, cruel, or dreaming or whatever, but according to your ethics they must be taken into account. (And according to mine, as well, actually). The number of people saying to torture the guy will be greater than the number of Planck lengths in the observable universe. That alone is enough disutility to say “Torture away!”
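A rough back-of-the-envelope version of this first point, worked in log10 since the raw quantities cannot be written out; the one-in-a-googol cruelty rate and the 10^(10^100) lower bound on 3^^^3 are illustrative assumptions, and the Planck-length figure is the diameter of the observable universe divided by the Planck length:

```python
import math

# Back-of-the-envelope version of the first point above, worked entirely in
# log10 because the raw numbers cannot be represented. The cruelty rate and the
# lower bound on 3^^^3 are illustrative assumptions, not claims about anyone.

log10_population = 10 ** 100    # crude lower bound: 3^^^3 is far larger than 10^(10^100)
log10_cruel_rate = -100         # suppose only 1 in 10^100 respondents answers "torture him"
log10_cruel_count = log10_population + log10_cruel_rate

# Planck lengths spanning the observable universe: ~8.8e26 m diameter / ~1.6e-35 m.
log10_planck_lengths = math.log10(8.8e26 / 1.616e-35)   # roughly 61.7

print(log10_cruel_count > log10_planck_lengths)          # True, by an absurd margin
```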
The other problem is that you assert “something is wrong” when a wrong question, asked with incomplete information 3^^^3 times, fails to change my decision. What is wrong? I can tell you that your intuition that “something must be wrong” is just incorrect. Nothing is wrong with the decision. (And this paragraph is for the LCPW where everyone answered selflessly to the question, which is of course not even remotely plausible.)