The point of using perfect-information problems is that they should be simpler to handle. If a moral system can’t handle perfect-information problems, then it certainly can’t handle the more complicated problems where perfect information is lacking. In this respect it is similar to Newcomb’s Problem: the problem itself will never come up in that form, but if a decision theory can’t give a coherent response to Newcomb’s, then there’s a problem.
JoshuaZ:
The point of using perfect-information problems is that they should be simpler to handle. If a moral system can’t handle perfect-information problems, then it certainly can’t handle the more complicated problems where perfect information is lacking.
Suppose, however, that system A gets somewhat confused on the simple perfect-information problem, while system B handles it with perfect clarity, but that once realistic complications are introduced, system B ends up far more confused and inadequate than A, which maintains roughly the same level of confusion. In that situation, analysis based on the simple problem will suggest the wrong conclusion about the overall merits of A and B.
I believe that this is in fact the case with utilitarianism versus virtue ethics. Utilitarianism will give you clear and unambiguous answers in unrealistically simple problems with perfect information, perfectly predictable consequences, and an intuitively obvious way to sum and compare utilities. Virtue ethics might get somewhat confused and arbitrary in these situations, but it does not get much worse in real-world problems, in which utilitarianism is usually impossible to apply in a coherent and sensible way.
Someone who claims to be confused about the trolley problem with clearly enumerated options and outcomes, but not confused about a real world problem with options and outcomes that are difficult to enumerate and predict, is being dishonest about his level of confusion. A virtue ethicist should be able to tell me whether pushing the fat man in front of the train is more virtuous, less virtuous, or as virtuous as letting the five other folks die.
I think you misunderstood my comment, and in any case, that’s a non sequitur, because the problem is not only with the complexity but also with the artificiality of the situation. I’ll try to state my position more clearly.
Let’s divide moral problems into three categories, based on (a) how plausible the situation is in reality, and (b) whether the problem is unrealistically oversimplified in terms of knowledge, predictability, and interpersonal utility comparisons:
(1) Plausible scenario, realistically complex.
(2) Implausible scenario, realistically complex.
(3) Implausible scenario, oversimplified.
(The fourth logical possibility, a plausible but oversimplified scenario, is not realistic, since any plausible scenario will feature realistic complications.) For example, trolley problems are in category (3), while problems that appear in reality are always in categories (1) and (2), and overwhelmingly in (1).
My claim is that utilitarianism provides an exact methodology for working with type (3) problems, but it completely fails for types (1) and (2), practically without exception. On the other hand, virtue ethics turns out to be fuzzier and more subjective than utilitarianism on type (3) problems (though it still handles them tolerably well), but unlike utilitarianism, it is also capable of handling types (1) and (2), and it usually handles the first (and most important) type extremely well. Therefore, it is fallacious to draw general conclusions about the merits of these approaches from thought experiments with type (3) problems.
Virtue ethics handles scenarios of type 1 (plausible scenarios that are realistically complex) extremely well.
I am not a utilitarian. I agree with this similar statement: communities of people committed to being virtuous have good outcomes (as evaluated by Sewing-Machine). I do not agree with this similar statement: communities of people committed to being virtuous are less confused about morality than I am.