This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.
Utilitarianism doesn’t say that. Maybe some variant says that, but general utilitarianism merely says that I should have a single self-consistent utility function of my own, which is free to assign whatever weights it likes to others.

ETA: PhilGoetz says otherwise. I believe that he is right; he’s an expert in the subject matter. I am surprised and confused.
If you’re unsure of a question of philosophy, the Stanford Encyclopedia of Philosophy is usually the best place to consult first. Its history of utilitarianism article says that
Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one’s own good.
The Classical Utilitarians, Jeremy Bentham and John Stuart Mill, identified the good with pleasure, so, like Epicurus, were hedonists about value. They also held that we ought to maximize the good, that is, bring about ‘the greatest amount of good for the greatest number’.
Utilitarianism is also distinguished by impartiality and agent-neutrality. Everyone’s happiness counts the same. When one maximizes the good, it is the good impartially considered. My good counts for no more than anyone else’s good. Further, the reason I have to promote the overall good is the same reason anyone else has to so promote the good. It is not peculiar to me.
Note the last paragraph in particular. Utilitarianism is agent-neutral: while it does take your utility function into account, it gives it no more weight than anybody else’s.
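To make the agent-neutrality point concrete, here is a minimal sketch with hypothetical outcomes and welfare numbers of my own choosing (nothing here comes from the SEP entry): the utilitarian ranking sums everyone's welfare with equal weight, while an egoist ranking counts only the agent's own.

```python
# Toy illustration of agent-neutrality (all names and numbers are hypothetical).
# Each outcome maps person -> welfare score.
outcomes = {
    "help_stranger": {"me": 1, "stranger": 10},
    "help_myself":   {"me": 5, "stranger": 0},
}

def utilitarian_value(outcome):
    # Agent-neutral: everyone's welfare counts the same.
    return sum(outcome.values())

def egoist_value(outcome, agent="me"):
    # Agent-relative: only the agent's own welfare counts.
    return outcome[agent]

best_utilitarian = max(outcomes, key=lambda o: utilitarian_value(outcomes[o]))
best_egoist = max(outcomes, key=lambda o: egoist_value(outcomes[o]))
print(best_utilitarian)  # help_stranger (11 vs 5)
print(best_egoist)       # help_myself   (5 vs 1)
```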
The “general utilitarianism” that you mention is mostly just “having a utility function”, not “utilitarianism”—utility functions might in principle be used to implement ethical theories quite different from utilitarianism. This is a somewhat common confusion on LW (one which I’ve been guilty of myself, at times). I think it has to do with the Sequences sometimes conflating the two.

EDIT: Also, in SEP’s Consequentialism article:
Since classic utilitarianism reduces all morally relevant factors (Kagan 1998, 17–22) to consequences, it might appear simple. However, classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:
Consequentialism = whether an act is morally right depends only on consequences (as opposed to the circumstances or the intrinsic nature of the act or anything that happens before the act).
Actual Consequentialism = whether an act is morally right depends only on the actual consequences (as opposed to foreseen, foreseeable, intended, or likely consequences).
Direct Consequentialism = whether an act is morally right depends only on the consequences of that act itself (as opposed to the consequences of the agent’s motive, of a rule or practice that covers other acts of the same kind, and so on).
Evaluative Consequentialism = moral rightness depends only on the value of the consequences (as opposed to non-evaluative features of the consequences).
Hedonism = the value of the consequences depends only on the pleasures and pains in the consequences (as opposed to other goods, such as freedom, knowledge, life, and so on).
Maximizing Consequentialism = moral rightness depends only on which consequences are best (as opposed to merely satisfactory or an improvement over the status quo).
Aggregative Consequentialism = which consequences are best is some function of the values of parts of those consequences (as opposed to rankings of whole worlds or sets of consequences).
Total Consequentialism = moral rightness depends only on the total net good in the consequences (as opposed to the average net good per person).
Universal Consequentialism = moral rightness depends on the consequences for all people or sentient beings (as opposed to only the individual agent, members of the individual’s society, present people, or any other limited group).
Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).
Agent-neutrality = whether some consequences are better than others does not depend on whether the consequences are evaluated from the perspective of the agent (as opposed to an observer).
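As a toy worked example of how two of these claims can come apart, here is a sketch of Total versus Average Consequentialism, with hypothetical welfare numbers:

```python
# Hypothetical worlds: each is a list of per-person welfare levels.
world_a = [10, 10, 10]        # 3 people: total 30, average 10
world_b = [6, 6, 6, 6, 6, 6]  # 6 people: total 36, average 6

def total_value(world):
    return sum(world)

def average_value(world):
    return sum(world) / len(world)

# Total Consequentialism ranks world_b higher (36 > 30);
# Average Consequentialism ranks world_a higher (10 > 6).
print(total_value(world_a), total_value(world_b))      # 30 36
print(average_value(world_a), average_value(world_b))  # 10.0 6.0
```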
PhilGoetz is correct, but your confusion is justified; it’s bad terminology. Consequentialism is the word for what you thought utilitarianism meant.

I thought a consequentialist is not necessarily a utilitarian. Utilitarianism should mean that all values are comparable and tradeable via utilons (measured in real numbers), and (ideally) a single utility function for measuring the utility of a thing (to someone). The Wikipedia page you link lists “utilitarianism” as only one of many philosophies compatible with consequentialism.
You are correct that utilitarianism is a type of consequentialism, and that you can be a consequentialist without being a utilitarian. Consequentialism says that you should choose actions based on their consequences, which pretty much forces you into the VNM axioms, so consequentialism is roughly what you described as utilitarianism. As I said, it would make sense if that is what utilitarianism meant, but despite my opinions, utilitarianism does not mean that. Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.
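A minimal sketch of that distinction, using hypothetical actions and numbers: a generic consequentialist agent only needs some consistent utility function of its own, which may weigh others however it likes, whereas the utilitarian criterion scores each consequence by aggregate welfare.

```python
# Hypothetical sketch: both agents pick the action with the best consequence,
# but they score consequences differently.
consequences = {
    "action_1": {"me": 9, "others_total": 2},
    "action_2": {"me": 3, "others_total": 20},
}

def my_utility(c):
    # A generic consequentialist just needs *some* consistent utility function;
    # this one happens to weigh the agent's own welfare heavily.
    return 1.0 * c["me"] + 0.1 * c["others_total"]

def aggregate_utility(c):
    # The utilitarian criterion: total welfare, everyone counted equally.
    return c["me"] + c["others_total"]

print(max(consequences, key=lambda a: my_utility(consequences[a])))         # action_1
print(max(consequences, key=lambda a: aggregate_utility(consequences[a])))  # action_2
```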
I see. Thank you for clearing up the terminology.

Then what would the term be for a VNM-rational, moral anti-realist who explicitly considers others’ welfare only because they figure in his utility function, and doesn’t intrinsically care about their own utility functions?
“Utilitarian” and all the other labels in normative ethics are labels for what ought to be in an agent’s utility function. So I would call this person someone who rightly stopped caring about normative philosophy.
I don’t know of a commonly agreed-upon term for that, unfortunately. “Utility maximizer”, “VNM-rational agent”, and “homo economicus” are similar to what you’re looking for, but none of these terms imply that the agent’s utility function is necessarily dependent on the welfare of others.

Rational self-interest?

To use an Objectivist term, it’s a person who’s acting in his “properly understood self-interest”.
Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.
Not just people but all the beings that serve as “vessels” for whatever it is that matters (to you). According to most common forms of utilitarianism, “utility” consists of happiness and/or (the absence of) suffering or preference satisfaction/frustration.
Thanks, but I tend to define and use my own terminology, because the standard terms are too muddled to use. I am an expert in my own terminology. Leon is talking about utilitarianism as the word is usually, or at least historically, used outside LessWrong, as a computation that everyone can perform and get the same answer, so society can agree on an action.
a computation that everyone can perform and get the same answer, so society can agree on an action.
But that computation is still a two-place function; it depends on the actual utility function used. Surely “classical” utilitarianism doesn’t just assume moral-utility realism. But without “utility realism” there is no necessary relation between the monster’s utility according to its own utility function, and the monster’s utility according to my utility function.
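One way to picture the two-place point, as a sketch with hypothetical agents and numbers: utility is evaluated by someone, of something, so the monster's score of an outcome under its own function need not bear any relation to my score of the same outcome under mine.

```python
# Hypothetical two-place utility: utility(evaluator, outcome) rather than utility(outcome).
def utility(evaluator, outcome):
    utility_functions = {
        # The monster assigns itself enormous value from consuming resources.
        "monster": lambda o: 1_000_000 if o == "monster eats everything" else 0,
        # My function scores the same outcomes on my own terms.
        "me":      lambda o: -100 if o == "monster eats everything" else 50,
    }
    return utility_functions[evaluator](outcome)

outcome = "monster eats everything"
print(utility("monster", outcome))  # 1000000 -- by the monster's own lights
print(utility("me", outcome))       # -100    -- by mine; nothing forces these to agree
```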
Humans are similar, so they have similar utility functions, so they can trade without too many repugnant outcomes. And because of this we sometimes talk of utility functions colloquially without mentioning whose functions they are. But a utility monster is by definition unlike regular humans, so the usual heuristics don’t apply; this is not surprising.
When I thought of a “utility monster” previously, I thought of a problem with the fact that my (and other humans’) utility functions are really composed of many shards of value and are bad at trading between them. So a utility monster would be something that forced me to sacrifice a small amount of one value (murder a billion small children) to achieve a huge increase in another value (make all adults transcendently happy). But this would still be a utility monster according to my own utility function.
On the other hand, saying “a utility monster is anything that assigns huge utility to itself—which forces you to assign huge utility to it too, just because it says so”—that’s just a misunderstanding of how utility works. I don’t know if it’s a strawman, but it’s definitely wrong.
I notice that I am still confused about what different people actually believe.
If by “moral-utility realism” you mean the notion that there is one true moral utility function that everyone should use, I think that’s what you’ll find in the writings of Bentham, and of Nozick. Not explicitly asserted; just assumed, out of lack of awareness that there’s any alternative. I haven’t read Nozick, just summaries of him.
Historically, utilitarianism was seen as radical for proposing that happiness could by itself be the sole criterion for an ethical system, and for being strictly consequentialist. I don’t know when the first person proposed that it makes sense to talk about different people having different utility functions. You could argue it was Nietzsche, but he meant that people could have dramatically opposite value systems that are necessarily at war with each other, which is different from saying that people in a single society can use different utility functions.
(What counts as a “different” belief, BTW, depends on the representational system you use, particularly WRT quasi-indexicals.)
Anyway, that’s no longer a useful way to define utilitarianism, because we can use “consequentialism” for consequentialism, and happiness turns out to just be a magical word, like “God”, that you pretend the answers are hidden inside of.
“Utilitarianism” is sometimes used for both that “variant” (valuing utility) and the meaning you ascribe to it (defining “value” in terms of utility.) The Utility Monster is designed to interfere with the former meaning. Which is the correct meaning …