There is an important point here: even if you can show that you can give an agent a utility function that represents following a particular moral theory, that utility function might not be the same from person to person. For example, if you believe lying violates the categorical imperative, you might not lie even to prevent ten people from lying in the future. What you are trying to minimize in this situation is instances of you lying, rather than of lying, full stop.
But any other moral agent would (by hypothesis) also be trying to minimize their own lying, and so you lose the right to say things like, “You ought to maximize the good consequences (according to some notion of good consequences),” which some would say is the defining claim of consequentialism.
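To make the contrast concrete, here is a rough sketch in my own notation (not Peterson’s): write L(j, w) for the number of lies agent j tells in world-history w. An agent-neutral lie-minimizer maximizes

U(w) = -\sum_j L(j, w),

the same function for every agent, while the Kantian-as-“consequentialist” agent i maximizes

U_i(w) = -L(i, w).

Each agent can be represented as maximizing some utility function, but there is no single function they are all maximizing.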
At any rate, you end up with a kind of “consequentialism” that’s in a completely different boat from, say, utilitarianism, and, to be honest, isn’t that interesting.
Certainly, other moral theories are not equivalent to utilitarianism, but why does that make them uninteresting to you?
Sorry, that wasn’t what I meant to convey! My point is that if you weaken the conditions for a theory being “consequentialism” enough, then obviously you’ll eventually be able to get everything in under that umbrella. But that may not be an interesting fact; it may in fact be nearly trivial. If you broaden the notion of consequences enough, and allow the good to be indexed to the agent we’re thinking about, then yes, you can make everyone a consequentialist. But that shouldn’t be surprising. And all the major differences between, say, utilitarianism and Kantianism would remain.
Who is weakening the conditions for a theory being “consequentialism”? The thing described by Peterson seems perfectly in line with consequentialism. And his point about asymmetry among moral theories remains.
Well, there are a lot of things that get called “consequentialism” (take a look at the SEP article for a similar point). To my ear, “consequentialism” connotes “agent-neutral,” but that may just be me. I feel like requiring neutrality is a more interesting position precisely because bare consequentialism is so weak: it’s not really surprising that almost everything is a form of it.
There’s also the possibility of accidental equivocation, since people use “consequentialism” to stand for so many things. I actually think the stronger interpretations are pretty common (again, the SEP article has a little discussion on this), and so there is some danger of people thinking that this shows a stronger result than it actually does.
Nah, people argue all the time about agent neutrality. Agent-neutral consequentialism is simply one form of consequentialism, albeit a popular one.
The problem with Kantianism-as-“consequentialism” is that the consequences you have to portray the agent as pursuing are not, on the face of it, very plausible ultimate goals. What makes the usual versions of consequentialism appealing is, in large part, the immediate plausibility of the claim that these goals (insert the particular theory of the good here) are what really and ultimately matter. If we specify particular types of actions in the goal (e.g., that lying is to be minimized) and index it to the agent, that immediate plausibility fades.