The problem with that one is that it comes across as an attempt to define the objection out of existence: it basically demands that you assume that X negative utility spread out across a large number of people really is just as bad as X negative utility concentrated on one person. “Shut up and multiply” only works if you assume that the numbers can be multiplied in that way.
That’s also the only way an interesting discussion can be held about it—if that premise is granted, all you have to do is make the number of specks higher and higher until the numbers balance out.
(And it’s in no way equivalent to the trolley problem, because the trolley problem compares deaths with deaths.)
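To put toy numbers on the aggregation premise (a minimal sketch; the symbols d, D, and N are mine, not from the original post): say one speck carries disutility d > 0 and fifty years of torture carries disutility D. Straightforward addition values N specks at N × d, so

N × d > D whenever N > D / d,

and some finite N settles the question in favor of choosing torture. The whole disagreement is over whether N × d is a legitimate quantity in the first place.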
For some reason, people keep thinking that Torture vs. Specks was written as an argument for utilitarianism. That makes no sense, because it’s the sort of thing that makes utilitarians squirm and deontologists gloat. What it is, instead, is a demand that if you’re going to call yourself a utilitarian, you’d better really mean it.
EY’s actual arguments for utilitarianism are an attempt to get you to conclude that you should choose Torture over Specks, despite the fact that it feels wrong on a gut level.
That gloating makes even less sense! There are people who gloat that their morality advocates doing that much additional harm to people? That sounds like a terrible move!
It seems to me that by the time you have evaluated which of two options is worse, you have already arrived both at the decision you would advocate and at the decision you would be proud of. The only way the boast could still be biased, after you have thought it through, is if you expected the target audience to be made up largely of people on your team.
TvDS is a thought experiment in which (particular flavors of) deontology support a conclusion that most people find comfortable (“torture is bad, dust specks in your eye are no big deal”) and (particular flavors of) utilitarianism support a conclusion that most people find uncomfortable (“torture is no big deal, dust specks in your eye are bad”).
It makes perfect sense to me that people find it satisfying to be exposed to arguments in which their previously held positions make them feel comfortable, and find it disquieting to be exposed to arguments in which their previously held positions make them feel uncomfortable.
My point is that the motive for the boast is just that most people are naturally deontologists and so can be anticipated to agree with the deontological boast. Aside from that, it is trivially the case that people can be expected to be proud of reaching what they consider the correct moral decision, simply because they arrived at a decision at all.
*gloat*
That is even more fun as an emote than I thought it would be.
Do you have some preexisting explanation for why you’re a deontologist?
I am experiencing a strong desire at this moment for Alicorn to reply “Because it’s the right thing to be.”
It is only marginally stronger than my desire for her to reply “Because I expect it to have good results,” though.
Reminds me of Hitchens’ cheeky response to questions about free will: “Yes, I have free will; I have no choice but to have it.”
Personally, I’m a virtue ethicist because it has better outcomes. Though I reason consequentially when it’s the right thing to do.
I think “because it’s the right thing to be” sounds more virtue-ethicist than deontologist.
Is “because I should be” better?
Or do I not understand deontology well enough to make this joke?
I think the second thing. I don’t actually think being a deontologist, per se, is morally required—you just have to do the things it requires, not necessarily for the relevant principled reasons.
That depends on how reflexive your particular set of rules is...
This post and the comments under it might help.
I choose specks, but I found the discussion very helpful nonetheless.
Specifically, I learned that if you believe suffering is additive in any way, choosing torture is the only answer that makes sense. If you don’t believe that (and I don’t), then your references to “negative utility” are not as well defined as you think.
Edit: In other words, I think Torture v. Specks is just a restatement of the Repugnant Conclusion.
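One way to see how much work “additive in any way” is doing (a sketch; the sublinear rule is my own example, not from the comment above): even if N specks aggregate sublinearly, say as d × log(N) rather than d × N, any rule that grows without bound can still be pushed past the torture by raising N, and 3^^^3 is already so large that even log(3^^^3) is a power tower too tall to write down. Only a bounded rule, where the collective badness of specks can never exceed some fixed ceiling below the torture, escapes; rejecting additivity amounts to insisting on such a bound.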
The Repugnant Conclusion can be rejected by average utilitarianism, whereas in Torture vs. Dustspecks average utilitarianism still tells you to torture, because the disutility of 50 years of torture divided among 3^^^3 people is less than the disutility of 3^^^3 dustspecks divided among 3^^^3 people. That’s an important structural difference between the two thought experiments.
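To spell that division out (a sketch; D for the disutility of fifty years of torture, d for one dustspeck, and N = 3^^^3 for the population are my labels, assuming both options are averaged over the same N people): average utilitarianism compares

D / N (torture: one person bears D, averaged over everyone) against (N × d) / N = d (specks),

and since N is astronomically large, D / N falls far below d for any finite D and any d > 0. So here, unlike with the Repugnant Conclusion, the average view and the total view point the same way.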
Right. The problem was that the people on that side seemed to have a tendency to ridicule the belief that it is not.
Yes, the ridicule was annoying, although I think many have learned their lesson.
The problem with our position is that it leaves us vulnerable to being Dutch-booked by opponents who are willing to be sufficiently cruel. (How much would you pay not to be tortured? Why not that amount plus $10?)
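To make the money pump concrete (a hypothetical sketch of the parenthetical, with numbers of my own choosing): if torture is lexically worse than any finite loss, then for every amount x, paying x beats being tortured, so you hand over $1,000, then $1,010, then $1,020, and so on; there is no finite reservation price at which you are entitled to stop. Commensurable disutilities block the pump: once torture is worth some finite T to you, you pay at most T and refuse anything above it.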
Hmm … what examples of learning their lesson are you thinking of?
This is a much more mature response to the debate.
Let’s be clear: I do subscribe to utilitarianism, just not a naive one. (Long-range consequences and advanced decision theories make a big difference.) If I had magical levels of certainty about the problem statement, then I’d bite the bullet and pick torture. But in real life, that’s an impossible state for a human being to occupy on object-level problems.
Truly meta-level problems are perhaps different; given a genie that magically understands human moral intuitions and is truly motivated to help humanity, I would ask it to reconcile our contradictory intuitions in a utilitarian way rather than in a deontological way. (It would take a fair bit of work to turn this hypothetical into something that makes real sense to ask, but one example is how to structure CEV.)
Does that make sense as a statement of where I stand?
It’s similar, but it’s not quite a restatement. Average utilitarianism seems to suggest “torture” when presented with TvDS, for example, while it doesn’t support the Repugnant Conclusion as it’s usually formulated.