The point of utilitarianism, as I understand it, is to provide a computation that everyone can perform and get the same answer to, so society can agree on an action.
But that computation is still a two-place function: its result depends on which utility function you plug into it. Surely “classical” utilitarianism doesn’t just assume moral-utility realism? Yet without “utility realism” there is no necessary relation between the monster’s utility according to its own utility function and the monster’s utility according to my utility function.
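A minimal sketch of that two-place point, with agents and numbers invented purely for illustration (nothing here comes from the literature): the same outcome gets wildly different utilities depending on whose function evaluates it.

```python
# Utility is a two-place function: it takes an outcome and (implicitly) the
# utility function doing the evaluating. All agents and numbers are made up.

def monster_utility(outcome: str) -> float:
    # The monster's own function: it values being fed astronomically.
    return {"feed_monster": 1e9, "status_quo": 0.0}[outcome]

def my_utility(outcome: str) -> float:
    # My function assigns the monster's feeding a modest negative value;
    # nothing obliges it to track the monster's self-assessment.
    return {"feed_monster": -100.0, "status_quo": 0.0}[outcome]

outcome = "feed_monster"
print(monster_utility(outcome))  # 1000000000.0 -- huge, by the monster's lights
print(my_utility(outcome))       # -100.0 -- by mine; no necessary relation
```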
Humans are similar to one another, so they have similar utility functions, and so they can trade without producing too many repugnant outcomes. Because of this we sometimes speak colloquially of “utility” without saying whose utility function we mean. But a utility monster is by definition unlike regular humans, so it is no surprise that the usual heuristics break down for it.
When I thought about “utility monsters” before, I located the problem in the fact that my utility function (and other humans’) is really composed of many shards of value and is bad at trading between them. On that reading, a utility monster is something that forces me to sacrifice a small amount of one value (murder a billion small children) to achieve a huge increase in another (make all adults transcendently happy). But notice that this is a utility monster according to my own utility function.
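Here is a sketch of that failure mode; the shard weights and magnitudes are invented for illustration. If my shards trade linearly, a big enough gain in one shard can buy an arbitrarily repugnant loss in another, so the monstrous deal wins by my own aggregate.

```python
# Hypothetical shard weights; the point is only that linear trade between
# shards lets one shard's huge gain swamp another shard's total loss.

def my_aggregate_utility(children_alive: float, adult_happiness: float) -> float:
    return 1.0 * children_alive + 1.0 * adult_happiness

status_quo   = my_aggregate_utility(children_alive=1e9, adult_happiness=1e6)
monster_deal = my_aggregate_utility(children_alive=0.0, adult_happiness=1e12)
print(monster_deal > status_quo)  # True: my own function endorses the trade
```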
On the other hand, the reading on which a utility monster is anything that assigns huge utility to itself, and thereby forces you to assign huge utility to it too just because it says so, is simply a misunderstanding of how utility works. I don’t know whether anyone actually holds that view, so it may be a strawman, but it is definitely wrong.
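To make the confused version explicit (again a hypothetical sketch): the entire mistake is a single substitution, defining my valuation of an outcome as whatever the monster reports about itself.

```python
# The whole error is this one substitution: my utility "must" equal the
# monster's self-report, just because the monster says so.

def monsters_self_report(outcome: str) -> float:
    return 1e9 if outcome == "feed_monster" else 0.0

def confused_my_utility(outcome: str) -> float:
    return monsters_self_report(outcome)  # nothing about utility licenses this

print(confused_my_utility("feed_monster"))  # 1000000000.0 -- "forced" on me by fiat
```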
I notice that I am still confused about what different people actually believe.
If by “moral-utility realism” you mean the notion that there is one true moral utility function that everyone should use, I think that’s what you’ll find in the writings of Bentham and Nozick: not explicitly asserted, just assumed, out of an apparent lack of awareness that there is any alternative. (I haven’t read Nozick himself, only summaries of him.)
Historically, utilitarianism was seen as radical for proposing that happiness alone could be the criterion for an ethical system, and for being strictly consequentialist. I don’t know who first proposed that it makes sense to talk about different people having different utility functions. You could argue it was Nietzsche, but he meant that people could have dramatically opposed value systems that are necessarily at war with each other, which is different from saying that people within a single society can use different utility functions.
(What counts as a “different” belief, BTW, depends on the representational system you use, particularly WRT quasi-indexicals.)
Anyway, that’s no longer a useful way to define utilitarianism, because we can use “consequentialism” for consequentialism, and “happiness” turns out to be just a magical word, like “God”, inside which you pretend the answers are hidden.