“Good people are consequentialists, but virtue ethics is what works,”
To nitpick a little: I don’t think consequentialism even allows one to coherently speak about good people, and it certainly doesn’t show that consequentialists are such people (the standard example of the alien who tortures people whenever it finds consequentialists).
Moreover, I don’t believe there is any sense in which one can show that people who aren’t consequentialists are making some mistake, or even that people who value other consequences are doing so. You tacitly admit this with your example of paper-clip-maximizing aliens, and I doubt you can coherently claim that those who assert that virtue ethics is objectively correct are any less rational than those who assert that consequentialism is.
You and I both judge non-consequentialists to be foolish, but we have to be careful to distinguish between simply strongly disapproving of their views and actually accusing them of irrationality. Indeed, the actions prescribed by any non-consequentialist moral theory are identical to those prescribed by some consequentialist theory: every possible choice pattern results in a distinct total world state, so you can always order world states to reproduce whatever moral theory you like, as sketched below.
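To spell out that claim, here is a minimal sketch of the “consequentializing” construction it relies on (the notation is mine, not anything from the original post). Suppose in each choice situation $c$ the available acts are $A(c)$, and each act $a$ fixes a unique total world state $w(a,c)$, since the total state includes the act itself. Given any moral theory that prescribes acts $P(c) \subseteq A(c)$, define

$$u(w) = \begin{cases} 1 & \text{if } w = w(a,c) \text{ for some situation } c \text{ and some } a \in P(c), \\ 0 & \text{otherwise.} \end{cases}$$

A consequentialist who values exactly $u$ and maximizes it over the reachable world states will choose precisely the acts in $P(c)$, so her prescriptions coincide with those of the original, non-consequentialist theory.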
Given this point, I think it is a little dangerous to move to the meta-level. Ideally one would simply say, “I think hedonic (or whatever) consequentialism is objectively true, regardless of what is pragmatically useful.” Unfortunately, it’s very unclear what the ‘truth’ of consequentialism even consists in if those who follow a non-consequentialist moral theory aren’t making a logical error.
Pedantically speaking, it seems the best one can do is say that, when given the luxury of considering situations you aren’t emotionally close to and have time to think about, you will apply consequentialist reasoning that values X to recommend actions to people, and that in such moods you strive to bind your future behavior to what that reasoning demands.
Of course, even that is not quite right. Even in a contemplative mood we rarely become totally selfless, and I doubt you (any more than I) actually strive to bind yourself so that, given the choice, you would torture and kill your loved ones to spare n+1 strangers the same fate (setting aside any factors not relevant to the consequences you say you care about).
Overall it’s a big mess, and I don’t see any easy statements that are really correct.