“Good people are consequentialists, but virtue ethics is what works,” is what I usually say when this topic comes up. That is, we all think that it is virtuous to be a consequentialist and that good, ideal rationalists would be consequentialists. However, when I evaluate different modes of thinking by the effect I expect them to have on my reasoning, and evaluate the consequences of adopting that mode of thought, I find that I expect virtue ethics to produce the best adherence rate in me, most encourage practice, and otherwise result in actually-good outcomes.
But if anyone thinks we ought not to be consequentialists on the meta-level, I say unto you that lo they have rocks in their skulls, for they shall not steer their brains unto good outcomes.
If ever you want to refer to an elaboration and justification of this position, see R. M. Hare’s two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).
To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.
So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word ‘implant’; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.
How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.
That’s very interesting, but isn’t the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?
My understanding is that when Hare says ‘rules’ or ‘principles’ for level-1, he means them generically and is agnostic about what form they’d take. “Always be kind” is also a rule. For clarity, I’d substitute the word ‘algorithm’ for ‘rules’/‘principles’. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best—be it inviolable deontological rules, character-based virtue ethics, or something else.
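To make the substitution concrete, here’s a minimal sketch of how I picture the two levels. Everything in it—the `Policy` type, the candidate policies, the scores—is my own invention for illustration; Hare specifies no such algorithm.

```python
# Illustrative sketch only: every name and number here is invented
# to make the two-level idea concrete, not taken from Hare.
from typing import Callable

Policy = Callable[[str], str]  # a level-1 algorithm: situation -> action

def rule_policy(situation: str) -> str:
    """Candidate level-1 algorithm: an inviolable deontological rule."""
    return "refuse" if situation == "tempted to lie" else "proceed"

def virtue_policy(situation: str) -> str:
    """Candidate level-1 algorithm: act as an honest person would."""
    return "tell the truth, kindly" if situation == "tempted to lie" else "proceed"

def expected_value_of_adopting(policy: Policy) -> float:
    """The level-2 algorithm is consequentialist: score each candidate
    by the outcomes expected if a fallible human actually runs it
    (adherence rate, practice effects, etc.). Scores are made up."""
    return {rule_policy: 0.7, virtue_policy: 0.9}[policy]

# Level-2 selects the level-1 algorithm; level-1 is whatever wins.
level_1 = max([rule_policy, virtue_policy], key=expected_value_of_adopting)
print(level_1("tempted to lie"))  # -> "tell the truth, kindly"
```

Note that nothing in the level-2 step cares whether the winner looks like rules, virtues, or something else; it only looks at expected outcomes.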
Level-1 thinking is actually based on habit and instinct more than rules; rules are just a way to describe habit and instinct.
Level-1 is about rules which your habits and instincts can follow, but I wouldn’t say rules are just ways of describing them. Here we’re talking about normative rules, not descriptive System 1/System 2 stuff.
And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel’s point of view, and merely instinctual behaviour from the children’s point of view. I’d still feel that calling instinctual behaviour ‘virtue ethics’ is a bit strange.
Not quite. The initial instincts are the system-1 “presets”. These can and do change with time. A particular entity’s current system-1 behaviors are its “habits”.
Funny, I always thought it was the other way around… consequentialism is useful on the tactical level once you’ve decided what a “good outcome” is, but on the meta-level, trying to figure out what a good outcome is, you get into questions that you need the help of virtue ethics or something similar to puzzle through. Questions like “is it better to be alive and suffering or to be dead”, or “is causing a human pain worse than causing a pig pain”, or “when does it become wrong to abort a fetus”, or even “is there good or bad at all?”
I think that the reason may be that consequentialism requires more computation; you need to recalculate the consequences for each and every action.
The human brain is mainly a pattern-matching device—it uses pattern-matching to save on computation cycles. Virtues are patterns which lead to good behaviour. (Moreover, these patterns have gone through a few millennia of debugging—there are plenty of cautionary tales about people with poorly chosen virtues to serve as warnings). The human brain is not good at quickly recalculating long-term consequences from small changes in behaviour.
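As a toy illustration of the caching point (all functions and values below are invented; a ‘virtue’ here is just a memoized lookup):

```python
# Toy contrast: recomputing consequences per action vs. a cached
# virtue-pattern. All names and values are invented for illustration.
from functools import lru_cache
import time

def recompute_long_term_consequences(action: str) -> float:
    """Stand-in for the expensive calculation: re-simulating long-term
    consequences from scratch for each and every action."""
    time.sleep(0.1)  # pretend this is a costly simulation
    return {"be honest": 0.9, "cut corners": 0.3}.get(action, 0.5)

@lru_cache(maxsize=None)
def virtue_pattern(action: str) -> float:
    """A 'virtue' as a cached pattern: the expensive evaluation was done
    once (historically, over millennia of cautionary tales), and
    afterwards the brain just pattern-matches against the result."""
    return recompute_long_term_consequences(action)

virtue_pattern("be honest")  # slow the first time...
virtue_pattern("be honest")  # ...then an O(1) lookup ever after
```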
What actually happens is you should be consequentialist at even-numbered meta-levels and virtue-based on the odd-numbered ones… or was it the other way around? :p
Say I apply consequentialism to a set of end states I can reliably predict, and use something else for the set I cannot. In what sense should I be a consequentialist about the second set?
In the sense that you can update on evidence until you can marginally predict end states?
I’m afraid I can’t think of an example where there’s a meta-level but no predictive capacity on that meta-level. Can you give an example?
I have no hope of being able to predict everything… there is always going to be a large set of end states I can’t predict?
Then why have ethical opinions about it at all? Again, can you please give an example of a situation where this would come up?
Lo! I have been so instructed-eth! See above.
“Good people are consequentialists, but virtue ethics is what works,”
To nitpick a little, I don’t think consequentialism even allows one to coherently speak about good people, and it certainly doesn’t show that consequentialists are such people (the standard example of the alien who tortures people whenever it finds consequentialists).
Moreover, I don’t believe there is any sense in which one can show that people who aren’t consequentialists are making some mistake, or even that people who value other consequences are doing so. You tacitly admit this with your examples of paperclip-maximizing aliens, and I doubt you can coherently claim that those who assert that virtue ethics is objectively correct are any less rational than those who assert that consequentialism is correct.
You and I both judge non-consequentialists to be foolish, but we have to be careful to distinguish between simply strongly disapproving of their views and actually accusing them of irrationality. Indeed, the actions prescribed by any non-consequentialist moral theory are identical to those prescribed by some consequentialist theory (every possible choice pattern results in a different total world state, so you can always order world states to give identical results to whatever moral theory you like).
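To spell out the construction in that parenthetical (the notation is mine): given any moral theory $T$, define a utility function over total world-states

$$U_T(w) = \begin{cases} 1 & \text{if } w \text{ is a world-state produced by following } T\text{'s prescriptions,} \\ 0 & \text{otherwise.} \end{cases}$$

Since every distinct choice pattern yields a distinct total world-state, an agent maximizing $U_T$ acts exactly as $T$ prescribes, so $T$ is extensionally identical to the consequentialist theory “maximize $U_T$”.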
Given this point I think it is a little dangerous to speak to the meta-level. Ideally one would simply say “I think hedonic (or whatever) consequentialism is objectively true, regardless of what is pragmatically useful.” Unfortunately, it’s very unclear what the ‘truth’ of consequentialism even consists in if those who follow a non-consequentialist moral theory aren’t logically incorrect.
Pedantically speaking, it seems the best one can do is say that, when given the luxury of considering situations you aren’t emotionally close to and have time to think about, you will apply consequentialist reasoning that values X to recommend actions to people, and that in such moods you do strive to bind your future behavior as that reasoning demands.
Of course that too is still not quite right. Even in a contemplative mood we rarely become totally selfless, and I doubt you (any more than I) actually strive to bind yourself so that, given the choice, you would torture and kill your loved ones to help n+1 strangers avoid the same fate (assuming those factors aren’t relevant to the consequences you say you care about).
Overall it’s all a big mess and I don’t see any easy statements that are really correct.