This is called the “consequentialist doppelganger” phenomenon, when I’ve heard it described, and it’s very, very annoying to non-consequentialists. Yes, you can turn any ethical system into a consequentialism by applying the following transformation:
1. What would the world be like if everyone followed Non-Consequentialism X?
2. You should act to achieve the outcome yielded by Step 1.
But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.
I’m tempted to ask what kind of reasons could possibly fall into such a category—but we don’t have to have that discussion now unless you particularly want to.
Mainly, I just wanted to point out that when whoever-it-was above mentioned “your utility function”, you probably should have interpreted that as “your preferences”.
There should be a “Deontology for Consequentialists” post, if there isn’t already.
I might write that.
Perhaps I should write “Utilitarianism for Deontologists”. Here goes:
“Follow the maxim: ‘Maximize utility’”.
Actually, it was exactly the problems with this formulation that I was talking about in the pub with LessWrongers on Saturday. Consequentialism isn’t about maximizing anything; that’s a deontologist’s way of looking at it. Consequentialism says that if action A has an outcome better than action B’s by Y, then action A is better than action B by Y. It follows that the best action is the one with the best outcome, but there isn’t some bright crown on the best action compared to which all other actions are dull and tarnished; other actions are worse to exactly the extent to which they bring about worse consequences, that’s all.
I’d like to see you write Virtue Ethics for Consequentialists, or for Deontologists.
“Being virtuous is obligatory, being vicious is forbidden.”
This feels like cheating.
“Do that which leads to people being virtuous.”
I don’t think this is right. This would seem to indicate that one could do the ethical thing by being a paragon of viciousness if people learned from your example.
How about, “Maximize your virtue.”
So other people’s virtue is worth nothing?
Strictly, no. Virtue ethics is self-regarding that way. But it isn’t like virtue ethics says you shouldn’t care about other people’s virtue. It just isn’t calculated at that level of the theory. Helping other people be virtuous is the compassionate and generous thing to do.
Agreed, at least on the common (recent American) ethical egoist reading of virtue ethics.
Such a person is sometimes called a “Mad Bodhisattva”.
Certainly a way I’ve framed it in the past (and it sounds perfectly in line with the Confucian conception of virtue ethics) but I don’t think it’s quite right. At the very least, it’s worth mentioning that a lot of virtue ethicists don’t believe a theory of right action is appropriately part of virtue ethics.
Please do. I’d love to read it.
Ha! I was about to say, “I wonder if Alicorn might be interested in writing such a post”.
Not to butt in, but “x is morally obligatory” is a perfectly good reason to do any x. That is the case whether x is exhibiting some virtue, following some rule, or maximizing some end.