Our question is this: is there a consequentialist view according to which it is right for someone to care more about his own welfare, as such? I said there is no such view, because consequentialist theories are agent-neutral (i.e., a consequentialist value function is indifferent between outcomes that are permutations of each other with respect to individuals and nothing else; switching Todd and Steve can’t make an outcome better, if Steve ends up with all of the same properties as Todd and vice versa).
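Putting the parenthetical in symbols (just a sketch; V is the value function, o an outcome, and π any permutation of individuals that leaves everything else about the outcome fixed):

\[
V(\pi(o)) = V(o) \quad \text{for every outcome } o \text{ and every such permutation } \pi.
\]

So the outcome in which Steve ends up with all of Todd’s properties, and Todd with all of Steve’s, gets exactly the same value as the original.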
I agree that a preference utilitarian could believe that, in a version of the example I described, it could be better to help yourself. But that is not the case I described, and it doesn’t show that consequentialists can care extra about themselves, as such. My “consequentialist” said:
“I should get the morphine. It would be better if I got it, and the only reason it would be better is that it would be me, rather than him, who received it.”
Yours identifies a different reason. He says, “I should get the morphine. This is because there would be more total preference satisfaction if I did this.” This is a purely agent-neutral view.
My “consequentialist” is different from your consequentialist. Mine doesn’t think he should do what maximizes preference satisfaction. He maximizes weighted preference satisfaction, where his own preference satisfaction is weighted by a real number greater than 1. He also doesn’t think his preferences are more important in some agent-neutral sense. He thinks that other agents should use a similar procedure, weighing their own preferences more than the preferences of others.
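One way to write that view down (a sketch only; the weight w and the satisfaction terms u_j are my notation, not part of the view itself):

\[
V_S(o) = w \cdot u_S(o) + \sum_{j \neq S} u_j(o), \qquad w > 1,
\]

where u_j(o) is how well agent j’s preferences are satisfied in outcome o. Each agent S ranks outcomes by their own V_S, with themselves plugged into the S slot.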
You can bring out the difference between the two views by considering a case where all that matters to the agents is having a minimally painful death. My “consequentialist” holds that even in this case, he should save himself (and likewise for the other guy). I take it that on the view you’re describing, saving yourself and saving the other person are equally good options in this new case. Therefore, as I understand it, the view you described is not a consequentialist view according to which agents should always care more about themselves, as such. Perhaps we are engaged in a terminological dispute about what counts as caring about your own welfare more than the welfare of others, just because it is yours?
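In case it helps, toy numbers for the new case (using the sketch above): say a minimally painful death is worth u to whoever gets the morphine, and nothing else is at stake. Then

\[
\text{agent-neutral total: } u \text{ vs. } u, \qquad \text{weighted: } w \cdot u \text{ vs. } u \ (w > 1),
\]

so the agent-neutral view calls it a tie, while the weighted view tells each of us to save ourselves.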
I said there is no such view, because consequentialist theories are agent-neutral (i.e., a consequentialist value function is indifferent between outcomes that are permutations of each other with respect to individuals and nothing else; switching Todd and Steve can’t make an outcome better, if Steve ends up with all of the same properties as Todd and vice versa)
I don’t think this is a necessary property for a value system to be called consequentialist. Value systems can differ in which properties of agents they care about, and a lot of value systems single out the agent that implements them as a special case.
This is where things get murky. The traditional definition is this:
Consequentialism: an act is right if no other option has better consequences
You can say that it is consistent with consequentialism (on this definition) to favor yourself, as such, only if you think that situations in which you are better off are better than situations in which a relevantly similar other is better off. Unless you think you’re really special, you end up thinking that the relevant sense of “better” is relative to an agent. So some people defend a view like this:
Agent-relative consequentialism: For each agent S, there is a value function V_S such that it is right for S to A iff A-ing maximizes value relative to V_S.
When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.) So some folks think, myself included, that we’d do better to stick with this definition:
Agent-neutral consequentialism: There is an agent-neutral value function v such that an act is right iff it maximizes value relative to v.
I don’t think there is a lot more to say about this, other than that paradigm historical consequentialists rejected all versions of agent-relative consequentialism that allowed the value function to vary from person to person. Given the confusion, it would probably be best to stick to the latter definition or always disambiguate.
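To make the emptiness worry concrete, take the lie-counting function from the parenthetical above (say the fewer lies the better) and write it into the agent-relative schema, again just as a sketch:

\[
V_S(o) = -\big(\text{number of lies } S \text{ tells in } o\big).
\]

That is a perfectly well-formed V_S, so “it is right for S to A iff A-ing minimizes S’s own lying” counts as a consequentialist view by the agent-relative definition, which is the sort of result that makes the label look empty.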
When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.)
Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.
Here’s a non-agent-neutral consequentialist value that you might find more praiseworthy: prefer the well-being of friends and family over that of strangers.
Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.
Yeah, the objection wasn’t supposed to be that the definition of “consequentialism” is bad because some implausible view counts as consequentialist on it. The objection was that pretty much any maximizing view could count as consequentialist, so the distinction isn’t really worth making.