The answer is obvious, and it is SPECKS. I would not pay one cent to stop 3^^^3 individuals from each getting a dust speck in their eye.
Both answers assume this is an all-else-equal question. That is, we’re comparing two kinds of pain against one another. (If we’re instead trying to figure out what the consequences would be if the experiment happened in real life—for instance, how many people would get a dust speck in their eye while driving a car—the answer is obviously different.)
I’m not sure what my ultimate reason is for picking SPECKS. I don’t believe there are any ethical theories that are watertight.
But if I had to give a reason, I would say this: if I were among the 3^^^3 individuals who might get a dust speck in their eye, I would of course pay that price to keep one innocent person from being tortured. And I can imagine that not just I but many others would do the same. If we can imagine 3^^^3 individuals, I believe we can imagine that many people agreeing to save one person, at a very small cost to each of those who bear it.¹
If someone then showed up and said: “Well, everyone’s individual cost was negligible, but the total cost—when added up—is actually on the order of [3^^^3 / 10²⁹] years of torture. This is much higher, so obviously that is what we should care most about!” … I would then ask why one should care about that total number. Is there someone who experiences all the pain in the world? If not, why should we care about some non-entity? Or, if the argument is that we should care about the multiversal bar of total utility for its own sake, how come?
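(For what it’s worth, here is one way such a figure could be arrived at, purely as an illustrative sketch; the conversion rate is my own assumption, not something given in the thought experiment. Suppose each dust speck is assigned a disvalue of 10⁻²⁹ torture-years. Summing over everyone then gives 3^^^3 × 10⁻²⁹ = 3^^^3 / 10²⁹ torture-years. And since 3^^^3 dwarfs any such constant, the exact rate hardly matters to the aggregationist: any non-zero disvalue per speck yields a total vastly larger than 50 years.)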
Another argument is that one needs a consistent utility function, since otherwise one can be made to flip one’s preferences—that is, be walked step by step through a series of preference rankings until one ends up preferring the opposite of the position one started with. But I don’t see how Yudkowsky achieves this. In the article, the most he shows is that someone who prefers one person being tortured for 50 years to a googol people being tortured for a bit less than 50 years would also prefer “a googolplex people getting a dust speck in their eye” to “a googolplex/googol people getting two dust specks in their eye”. How is the latter inconsistent with preferring SPECKS over TORTURE? Maybe it is for someone with a Benthamite utility function, but I don’t have one.
Okay, but what if not everyone agrees to being hit by a dust speck? Ah, yes. Those. Unfortunately there are quite a few of them—maybe 4 in the LW community and then 10k-1M (?) elsewhere—so it is too expensive to bargain with them all. This means they will have to be a bit inconvenienced.
So, yeah, it’s not a perfect solution; there is no such thing when every moral position can be challenged by some hypothetical scenario. But for me, this means that SPECKS is obviously far preferable to TORTURE.
¹ I’d be willing to subject myself to some small amount of torture to keep one individual from being tortured. Maybe 10 seconds, maybe 30 seconds, maybe half an hour. And if 3^^^3 others would be willing to submit themselves to that, and the one who would be tortured is not some truly radical Benthamite (someone who would prefer being tortured themselves to a much bigger total amount of torture being produced in the universe), then I’d prefer that as well. I really don’t see why it would be ethical to care about the great big utility meter—when it corresponds to nothing anyone actually feels.