To be honest, I’m not entirely sure that anyone is a consequentialist.
I do use consequentialism a lot, but almost always in combination with an intuitive sort of ‘sanity check’: I will try to assign values to different outcomes and maximize that value in the usual way, but I instinctively shrink from any answer that involves things like “start a war” or “murder hundreds of people.”
For example, consider a secret lottery in which doctors quietly murder one out of every [n] thousand patients in order to harvest their organs and save more lives than they take. There are consequentialist arguments against this, such as the risk of discovery and the consequent collapse of trust in hospitals, but I don’t reject the idea because I’ve assigned QALY values to each outcome. I reject it because a conspiracy of murder-doctors is bad.
On the one hand, it’s easy to say that this is a moral failing on my part, and it might be that simple. Sainthood in deontological religious traditions looks like sitting in the desert for forty years; sainthood in consequentialist moral traditions probably looks more like Bond villainy. (The relative lack of real-world Bond villainy is part of what makes me suspect that there might be no consequentialists.)
But on the other hand, consequentialism is particularly prone to value misalignment. In order to systematize human preferences or human happiness, it requires a metric; in introducing a metric, it risks optimizing the metric itself over the actual preferences and happiness. So it seems important to have an ability to step back and ask, “am I morally insane?”, commensurate with one’s degree of confidence in the metric and method of consequentialism.
This sounds to me very strongly like a rejection of utilitarianism, not of consequentialism.
Presumably you don’t have ontologically basic objections to a conspiracy of murder-doctors, because “conspiracy,” “murder,” and “doctor” are all not ontologically basic. And you aren’t saying “this is wrong because murder is wrong” or “this is wrong because they are bad people for doing it.” You’re saying “this is wrong because it results in a bad world-state.”
Consequentialism only requires a partial ordering of worlds, not a metric; and satisficing under uncertainty over a family of possible utility functions probably looks a lot more like ordinary good behavior than single-metric maximization does.
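To make that contrast concrete, here is a minimal toy sketch (my own illustration, not anything from the original exchange; the actions, utility functions, and threshold are all made up): require an action to clear a “good enough” bar under every utility function you think might be right, instead of maximizing one metric.

```python
from typing import Callable, Iterable

Utility = Callable[[str], float]  # hypothetical: maps an action/outcome to a score

def satisficing_choice(
    actions: Iterable[str],
    candidate_utilities: Iterable[Utility],
    threshold: float,
) -> list[str]:
    """Keep only the actions that clear the bar under every candidate utility."""
    utilities = list(candidate_utilities)
    return [a for a in actions if all(u(a) >= threshold for u in utilities)]

# Toy, made-up inputs purely for illustration:
actions = ["treat patients normally", "run a secret organ lottery"]
candidate_utilities = [
    # A naive QALY-counter that likes the lottery's arithmetic...
    lambda a: 2.0 if a == "run a secret organ lottery" else 1.0,
    # ...and a rival candidate that weights trust and consent heavily.
    lambda a: -10.0 if a == "run a secret organ lottery" else 1.0,
]
print(satisficing_choice(actions, candidate_utilities, threshold=0.5))
# -> ['treat patients normally']
```

Argmax on the first candidate alone would endorse the lottery; requiring agreement across the whole family rejects it, which is the “looks more like ordinary good behavior” point.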
I do agree that there are “no real-world utilitarians” in the sense of having certainty in a specific utility function, though, with Peter Singer being the possible exception (and also looking kind of like a Bond villain).
> But on the other hand, consequentialism is particularly prone to value misalignment. In order to systematize human preferences or human happiness, it requires a metric; in introducing a metric, it risks optimizing the metric itself over the actual preferences and happiness.
Yes, in consequentialism you try to figure out what values you should have, and your attempts at doing better might lead you down the Moral Landscape rather than up toward a local maximum.
But what are the alternatives? In deontology you try to follow a fixed set of rules in the hope that they will keep you where you are on the landscape, which amounts to halting progress. Is that really preferable?
> So it seems important to have an ability to step back and ask, “am I morally insane?”, commensurate with one’s degree of confidence in the metric and method of consequentialism.
It seems to me that any moral agent should have this ability.