PhilGoetz is correct, but your confusion is justified; it’s bad terminology. Consequentialism is the word for what you thought utilitarianism meant.
I thought a consequentialist is not necessarily a utilitarian. Utilitarianism should mean that all values are comparable and tradeable via utilons (measured in real numbers), with (ideally) a single utility function measuring the utility of a thing (to someone). The Wikipedia page you link lists “utilitarianism” as only one of many philosophies compatible with consequentialism.
You are correct that utilitarianism is a type of consequentialism, and that you can be a consequentialist without being a utilitarian. Consequentialism says that you should choose actions based on their consequences, which pretty much forces you into the VNM axioms; so consequentialism is roughly what you described as utilitarianism. As I said, it would make sense if that were what utilitarianism meant, but despite my opinions, utilitarianism does not mean that. Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.
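To make the distinction concrete, here is a minimal sketch in Python (the actions, numbers, and function names are all hypothetical, not from the discussion above): any consequentialist picks the action whose consequence maximizes some utility function; a utilitarian is the special case whose utility function is aggregate welfare.

```python
def choose(actions, outcome_of, utility):
    # Generic consequentialist choice: pick the action whose
    # consequence scores highest under the given utility function.
    return max(actions, key=lambda a: utility(outcome_of(a)))

# An outcome here is just a mapping from each person to their welfare.
outcome_of = {
    "share": {"me": 5, "you": 5},
    "hoard": {"me": 7, "you": 0},
}.get

def utilitarian_utility(outcome):
    # Utilitarianism: utility is welfare aggregated over everyone.
    return sum(outcome.values())

def egoist_utility(outcome):
    # Also a consequentialist utility function, but not a utilitarian one.
    return outcome["me"]

print(choose(["share", "hoard"], outcome_of, utilitarian_utility))  # share
print(choose(["share", "hoard"], outcome_of, egoist_utility))       # hoard
```

Both agents satisfy the VNM picture of choosing by consequences; only the first counts as a utilitarian.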
I see. Thank you for clearing up the terminology.
Then what would the term be for a VNM-rational moral anti-realist who explicitly considers others’ welfare only because it figures in his utility function, and doesn’t intrinsically care about their own utility functions?
“Utilitarian” and all the other labels in normative ethics are labels for what ought to be in an agent’s utility function. So I would call this person someone who rightly stopped caring about normative philosophy.
I don’t know of a commonly agreed-upon term for that, unfortunately. “Utility maximizer”, “VNM-rational agent”, and “homo economicus” are similar to what you’re looking for, but none of these terms imply that the agent’s utility function is necessarily dependent on the welfare of others.
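For what it’s worth, here is a toy sketch of the agent in question (the weight and names are hypothetical): others’ welfare can appear as a term inside his own utility function without him ever consulting their utility functions.

```python
def my_utility(outcome, altruism_weight=0.3):
    # Others' welfare enters only as a weighted term in my own utility
    # function; their own utility functions are never consulted.
    my_welfare = outcome["me"]
    others_welfare = sum(v for person, v in outcome.items() if person != "me")
    return my_welfare + altruism_weight * others_welfare
```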
Rational self-interest?
To use an Objectivist term, it’s a person who’s acting in his “properly understood self-interest”.
Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.

Not just people, but all the beings that serve as “vessels” for whatever it is that matters (to you). According to most common forms of utilitarianism, “utility” consists of happiness and/or (the absence of) suffering, or of preference satisfaction/frustration.
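As a hypothetical sketch of that point: the utilitarian sum ranges over every welfare “vessel”, not just humans, and what counts as welfare depends on the flavor of utilitarianism (hedonic vs. preference-based).

```python
def hedonic_welfare(being):
    # Hedonic utilitarianism: happiness minus suffering.
    return being["happiness"] - being["suffering"]

def preference_welfare(being):
    # Preference utilitarianism: satisfied minus frustrated preferences.
    return being["satisfied_prefs"] - being["frustrated_prefs"]

def aggregate_utility(beings, welfare):
    # 'beings' can include humans, animals, or any other vessel of value.
    return sum(welfare(b) for b in beings)
```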