But they’re not taking a dust-speck to prevent torture—they’re taking a dust-speck to prevent torture and cause the dust-speck holocaust. If you drop relevant information, of course you get different answers; I see no reason your representation here is more essentially accurate, and some reason it might be less.
Sure, and that is intentional. You wouldn’t bother polling the universe to determine their answer to the same paradox you’re solving.
You can look at it this way. Each person who responds to your poll is basically telling you: “Do not factor me, personally, into your utility calculation.” It is equivalent to opting out of the equation. “Don’t you dare torture someone on my behalf!” The “dust-speck holocaust” then disappears!
Imagine this: You send everyone a little message that says, “Warning! You are about to get an annoying dust speck in your eye. But, partially due to this sacrifice of your comfort, someone else will be spared horrible torture.” Would/should they care that the degree to which they contributed to saving someone from torture is infinitesimal?
Let’s go on to pretend that we asked Omega to calculate exactly how many humans with dust specks in their eyes are equivalent to one person being tortured for fifty years. Let’s pretend this number comes out to 1x10^14 people. Turns out it was much smaller than 3^^^3. Omega gives us all this information and then tells us he’s only going to give dust specks to 1x10^14 minus one people. We breathe a huge sigh of relief: you don’t have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he’s changing the deal: he’s going to be putting a dust speck in YOUR eye as well.
Deciding, at this point, that you now have to torture somebody is equivalent to denying that you have the choice to say, “I can put up with this dust speck if the alternative is torturing somebody.” Bear in mind that taking the specks means putting them in exactly as many eyes as before, plus only one more: your own.
My example above merely extrapolates this to the case where each individual can decide to opt out.
But that’s not what a vote that way means; consider polling 100 individuals who are so noble as to pick 9 hours of torture over someone else getting 10. How many of them would pick torturing 99 other conditionally willing people over torturing one unwilling person? It is simply not the same question.
The correct response when Omega changes the deal is “Oh, come on! You’re making me decide between two situations that are literally within a dust speck’s worth of each other. Why bother me with such trivial questions?” Because that’s what it is. You’re not choosing between “dust speck in my eye” and “terrible thing happens”. You’re choosing between “terrible thing happens” and “infinitesimally less terrible thing happens, plus I have a dust speck in my eye.”
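To make that concrete, here is a toy sketch of the arithmetic, using one dust speck as the unit of disutility and Omega’s stipulated exchange rate of 10^14 specks per fifty-year torture; both figures are assumptions of the hypothetical, not measured quantities:

```python
# Toy arithmetic for the "within a dust speck's worth" point above.
# The unit (1 speck = 1 unit of disutility) and the exchange rate
# (10**14 specks equivalent to one fifty-year torture) are assumptions
# taken from the hypothetical, not measured quantities.

SPECK = 1
TORTURE = 10**14 * SPECK               # Omega's stipulated equivalence point

specks_original_deal = 10**14 - 1      # one speck short of parity
specks_changed_deal = specks_original_deal + 1  # your own eye is added

gap_original = TORTURE - specks_original_deal * SPECK
gap_changed = TORTURE - specks_changed_deal * SPECK

print(gap_original)  # 1: the specks win by a single speck's worth of disutility
print(gap_changed)   # 0: the two options are now exactly at parity
```

Either way, the two options never differ by more than one speck’s worth of disutility, which is the whole complaint.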
The first paragraph of this comment is a nitpick, but I felt compelled to make it: there is no way that 10^14 dust specks is anywhere near enough to equal one torture victim. Maybe if you multiplied it by a googolplex, then by the number of atoms in the universe, you’d be within a few orders of magnitude.
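For a sense of the scale being claimed, here is the log-base-10 arithmetic, treating all three figures as the rough, order-of-magnitude guesses they are (10^14 specks, a googolplex, and roughly 10^80 atoms in the observable universe):

```python
# Log-scale arithmetic for the nitpick above. All three figures are
# hand-waved, order-of-magnitude guesses, not measurements.

log10_specks = 14
log10_googolplex = 10**100   # a googolplex is 10 to the power of a googol
log10_atoms = 80             # rough estimate for the observable universe

log10_total = log10_specks + log10_googolplex + log10_atoms
print(log10_total - log10_googolplex)  # 94: the googolplex term dominates
```

The product works out to about 10^(10^100 + 94); the googolplex factor swamps the other two completely.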
And now for the meaty response.
You’re making the whole case extremely arbitrary and ignoring utility metrics, which I will now attempt to demonstrate.
Eliezer chose the number 3^^^3 so that no calculation of the disutility of the torture could ever match it, even if you have deontological qualms about torture (which most humans do). It simply doesn’t compare. Utilitarianism in the real world doesn’t work on fringe cases because utility can’t actually be measured. But if you could measure it, you’d always pick whichever option came out ahead, every single time, no matter how slight the margin. In your example,
We breathe a huge sigh of relief: you don’t have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he’s changing the deal: he’s going to be putting a dust speck in YOUR eye as well.
you ignore the part of my utility function that includes selflessness. Sacrificing something that means little to me to spare someone else intense suffering yields positive utility for me, and I assume for other people as well. (This, interestingly, also invalidates the example you gave earlier where you polled the 3^^^3 people asking what they wanted: you ignored altruism in the calculation.)
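As a minimal sketch of that point, with two purely illustrative numbers standing in for one respondent’s utility function (nothing in the dilemma specifies them):

```python
# Illustrative only: if a respondent's utility function includes an altruism
# term, consenting to the speck can net out positive for them, which is why
# leaving that term out changes the result of the poll.

SPECK_COST = 1.0        # assumed personal disutility of one dust speck
ALTRUISM_BENEFIT = 5.0  # assumed utility of helping to spare someone torture

net = ALTRUISM_BENEFIT - SPECK_COST
print(net > 0)  # True: this respondent consents rather than opting out
```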
Your problems with the Torture vs. Dust Specks dilemma all boil down to “Here’s how the decision changes if I change the parameters of the problem!” (and that doesn’t even work in most of your examples).
Here’s the real problem underlying the equation, and it’s invulnerable to nitpicks:
Omega comes to you and says, “I will create 3^^^3 units of disutility, or disutility equal to or less than that of the destruction of a single galaxy full of sentient life. Which do you choose?”
As has been said before,
I think the answer is obvious.