This is where things get murky. The traditional definition is this:
Consequentialism: an act is right iff no other option has better consequences.
You can say that favoring yourself, as such, is consistent with consequentialism (on this definition) only if you think that situations in which you are better off are better than situations in which a relevantly similar other is better off. Unless you think you’re really special, you end up thinking that the relevant sense of “better” is relative to an agent. So some people defend a view like this:
Agent-relative consequentialism: For each agent S, there is a value function V_S such that it is right for S to A iff A-ing maximizes value relative to V_S.
When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.) So some folks, myself included, think we’d do better to stick with this definition:
Agent-neutral consequentialism: There is an agent-neutral value function v such that an act is right iff it maximizes value relative to v.
I don’t think there is a lot more to say about this, other than that paradigm historical consequentialists rejected all versions of agent-relative consequentialism that allowed the value function to vary from person to person. Given the confusion, it would probably be best to stick to the latter definition or always disambiguate.
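It may help to put the two schemas side by side in rough notation, since the only real difference is where the quantifier over value functions sits. (The symbols are just my shorthand for the definitions above: Right_S(A) for “it is right for S to do A”, outcome(A) for A’s consequences; and strictly speaking agent-neutrality also requires that v not be defined in terms of any particular agent, which the notation doesn’t capture.)

\[
\text{Agent-relative: } \forall S\ \exists V_S :\ \mathrm{Right}_S(A) \iff V_S(\mathrm{outcome}(A)) \ge V_S(\mathrm{outcome}(A')) \text{ for every alternative } A' \text{ open to } S.
\]
\[
\text{Agent-neutral: } \exists v\ \forall S :\ \mathrm{Right}_S(A) \iff v(\mathrm{outcome}(A)) \ge v(\mathrm{outcome}(A')) \text{ for every alternative } A' \text{ open to } S.
\]

On the first schema each agent maximizes their own ranking of outcomes; on the second, one ranking does the work for everyone.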
When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.)
Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.
Here’s a non-agent-neutral consequentialist value that you might find more praiseworthy: prefer the well-being of friends and family over strangers.
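One rough way to spell that out, purely as an illustration (the additive weighting scheme is invented for the example, not a claim about how partiality must be modeled):

\[
V_S(o) \;=\; \sum_i w_S(i)\,\mathrm{wellbeing}_i(o), \qquad \text{where } w_S(i) \text{ is larger when } i \text{ is a friend or family member of } S.
\]

Because the weights are indexed to S, different agents rank the same outcomes differently, which is exactly what the agent-relative schema permits and a single agent-neutral v rules out.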
Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.
Yeah, the objection wasn’t supposed to be that the definition of “consequentialism” is a bad one because some implausible view counts as consequentialist on it. The objection was that pretty much any maximizing view could count as consequentialist, so the distinction isn’t really worth making.