Bob: “The point of using 3^^^3 is to avoid the need to assign precise values”.
But then you are not facing up to the problems of your own ethical hypothesis. I insist that advocates of additive aggregation take seriously the problem of quantifying the exact ratio of badness between torture and speck-of-dust. The argument falls down if there is no such quantity, but how would you arrive at it, even in principle? I do not insist on an impersonally objective ratio of badness; we are talking about an idealized rational completion of one’s personal preferences, nothing more. What considerations would allow you to determine what that ratio should be?
Unknown has pointed out that anyone who takes the opposite tack, insisting that ‘any amount of X is always preferable to just one case of Y!’, faces the problem of boundary cases: keep substituting worse and worse things for X, and eventually one will get into the territory of commensurable evils, and one will start trying to weigh up X′ against Y.
However, this is not a knockdown argument for additivism. Let us say that I am clear about my preferences for situations A, D and E, but I am in a quandary with respect to B and C. Then I am presented with an alternative moral philosophy, which offers a clear decision procedure even for B and C, but at the price of violating my original preferences for A, D or E. Should I say, oh well, the desirability of being able to decide in all situations is so great that I should accept the new system, and abandon my original preferences? Or should I just keep thinking about B and C until I find a way to decide there as well? A utility function needs only to be able to rank everything, nothing more. There is absolutely no requirement that the utility (or disutility) associated with n occurrences of some event should scale linearly with n.
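To make that last point concrete, here is one toy illustration (my own hypothetical numbers, not anything proposed by Bob or the original post): let the aggregate disutility of n dust specks be bounded, for instance

D(n) = C · (1 − 2^(−n)),

with C chosen to be smaller than the disutility assigned to fifty years of torture. This D still ranks every outcome, since more specks are always strictly worse, yet no number of specks, not even 3^^^3, ever sums past the torture. Whether such a function is the right idealized completion of one’s preferences is exactly what is at issue; the point is only that it is a consistent option, so non-linear aggregation is not ruled out in advance.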
This is an important counterargument so I’ll repeat it: The existence of problematic boundary cases is not yet a falsification of an ethical heuristic. Give your opponent a chance to think about the boundary cases, and see what they come up with! The same applies to my challenge to additive utilitarians, to say how they would arrive at an exact ratio: I am not asserting, a priori, that it is impossible. I am pointing out that it must be possible for your argument to be valid, and I’m giving you a chance to indicate how this can be done.
This whole thought experiment was, I believe, meant to illustrate a cognitive bias, a preference which, upon reflection, would appear to be mistaken, the mistake deriving from the principle that ‘sacred values’, such as an aversion to torture, always trump ‘nonsacred values’, like preventing minor inconveniences. But premises which pass for rational in a given time and culture—which are common sense, and just have to be so—can be wrong. The premise here is what I keep calling additivism, and we have every reason to scrutinize as critically as possible any premise which would endorse an evil of this magnitude (the 50 years of torture) as a necessary evil.
One last thought: I don’t think Ben Jones’s observation has been adequately answered. What if those 3^^^3 people are individually willing to endure the speck of dust rather than have someone tortured on their behalf? Again, boundary cases arise. But if we’re seriously going to resolve this question, rather than just all reaffirm our preferred intuitions, we need to keep looking at such details.