(Note: This comment contains positions which came from my mind without an origin tag attached. I don’t remember reading anything by Eliezer which directly disagrees with this, but I don’t represent this as anyone’s position but my own.)
“Standard” utilitarianism works by defining a separate per-agent utility function to represent each person’s preferences, and averaging (or summing) them to produce a composite utility function which every utilitarian is supposed to optimize. The exact details of what the per-agent utility functions look like, and how you combine them, differ from flavor to flavor. However, this structure—splitting the utility function up into per-agent utility functions plus an aggregation function—is wrong. I don’t know what a utility function that fully captured human values would look like, but I do know that it can’t be split and composed this way.
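For concreteness, here is a minimal Python sketch of the split-and-aggregate structure being described; the agents, the outcome representation, and the numbers are invented purely for illustration and are not anyone's proposal for an actual utility function.

```python
# A minimal sketch of the split-and-aggregate structure described above:
# one utility function per agent, plus an aggregation step that combines
# them into a single composite score. Agents, the outcome representation,
# and all numbers are invented for illustration.

from statistics import mean

# Per-agent utility functions: each maps an outcome (here just a dict of
# stipulated facts) to how good that outcome is for that particular agent.
per_agent_utility = {
    "alice": lambda outcome: outcome["alice_wellbeing"],
    "bob":   lambda outcome: outcome["bob_wellbeing"],
}

def composite_utility(outcome, mode="sum"):
    """Combine the per-agent scores into one number for the whole outcome."""
    scores = [u(outcome) for u in per_agent_utility.values()]
    return sum(scores) if mode == "sum" else mean(scores)

outcome = {"alice_wellbeing": 7.0, "bob_wellbeing": 3.0}
print(composite_utility(outcome, mode="sum"))      # 10.0
print(composite_utility(outcome, mode="average"))  # 5.0
```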
It breaks down most obviously when you start varying the number of agents; in the variant where you sum up utilities, an outcome where many people live lives just barely worth living seems better than an outcome where fewer people live amazingly good lives (but we actually prefer the latter); in the variant where you average utilities, an outcome where only one person exists but he lives an extra-awesome life is better than an outcome where many people lead merely-awesome lives.
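A toy calculation makes both failure modes concrete; the population sizes and utility values below are invented purely for illustration.

```python
from statistics import mean

# Invented numbers, purely to illustrate the two failure modes above.

# Sum variant: a vast population of lives barely worth living outscores
# a small population of amazingly good lives.
barely_worth_living = [0.01] * 1_000_000   # a million lives at utility 0.01
amazingly_good      = [90.0] * 100         # a hundred lives at utility 90
print(sum(barely_worth_living) > sum(amazingly_good))       # True: ~10000 > 9000

# Average variant: one extra-awesome life outscores many merely-awesome lives.
one_extra_awesome   = [100.0]
many_merely_awesome = [90.0] * 1_000_000
print(mean(one_extra_awesome) > mean(many_merely_awesome))  # True: 100 > 90
```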
Split-agent utility functions are also poorly equipped to deal with the problem of weighing agents against each other. If there’s a scenario where one person’s utility function diverges to infinity, then both sum- and average-utility aggregation claim that it’s worth sacrificing everyone else to make sure that happens (the “utility monster” problem).
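The same kind of toy arithmetic shows how a single diverging term swamps both aggregation rules; again, all of the numbers are invented for illustration.

```python
from statistics import mean

# Invented numbers illustrating the "utility monster": as one agent's utility
# grows without bound, both aggregation rules eventually prefer the outcome
# containing the monster, no matter how badly everyone else fares in it.
everyone_else = [-50.0] * 1_000   # a thousand people made much worse off
status_quo    = [0.0] * 1_000     # alternative outcome: no monster, nobody harmed

for monster_utility in (1e3, 1e6, 1e9):
    with_monster = everyone_else + [monster_utility]
    print(monster_utility,
          sum(with_monster) > sum(status_quo),     # sum-aggregation verdict
          mean(with_monster) > mean(status_quo))   # average-aggregation verdict
# Prints: 1000.0 False False / 1000000.0 True True / 1000000000.0 True True
```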
And the thing is, writing a utility function that captures human values is a hard and unsolved problem, and splitting it up by agent doesn’t actually bring us any closer; defining the single-agent function is just as hard as defining the whole thing.
I was about to cite the same sorts of things to explain why they DO disagree about what is good and bad. In other words, I agree with you that utilitarianism is wrong about the structure of ethics in precisely the way you described, but I think that also entails utilitarianism coming to different concrete ethical conclusions. If a murderer really likes murdering—if it’s truly a terminal value—the utilitarian HAS to take that into account. On Eliezer’s theory, this need not be so. So you can construct a hypothetical where the utilitarian has to allow someone to be murdered simply to satisfy the preferences of one murderer (or many murderers), whereas on Eliezer’s theory nothing of this nature has to be done.
That is a problem for average-over-agents utilitarianism, but not a fatal one; the per-agent utility function you use need not reflect all of that agent’s preferences. It can reflect something narrower, like “that agent’s preferences, excluding preferences that refer to other agents and which those agents would choose to veto”. (Of course, that’s a terrible hack, which must be added to the hacks needed to deal with varying population sizes, divergence, and so on, and the resulting theory ends up being extremely inelegant.)
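A rough sketch of what that narrower per-agent function could look like; the data structures, the veto judgments, and the numbers are all invented to illustrate the shape of the hack, not taken from the comment above.

```python
# Rough sketch of the proposed hack: build each agent's utility only from
# preferences that either don't refer to other agents or that the referenced
# agents would not choose to veto. All data here is invented for illustration.

# Each preference: (weight, set of other agents it refers to, vetoed by them?)
preferences = {
    "murderer": [
        (5.0, set(), False),        # self-regarding preference: kept
        (9.0, {"victim"}, True),    # refers to another agent who vetoes it: excluded
    ],
    "victim": [
        (8.0, set(), False),        # self-regarding preference: kept
    ],
}

def narrowed_utility(agent):
    """Sum only the preferences that survive the exclusion rule."""
    return sum(weight for weight, refers_to, vetoed in preferences[agent]
               if not (refers_to and vetoed))

composite = sum(narrowed_utility(a) for a in preferences)
print(composite)   # 13.0: the vetoed, other-regarding preference never enters
```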
True enough, there are always more hacks a utilitarian can throw onto their theory to avoid issues like this.
“in the variant where you sum up utilities, an outcome where many people live lives just barely worth living seems better than an outcome where fewer people live amazingly good lives (but we actually prefer the latter)”
Are you sure of this? It sounds a lot like scope insensitivity. Remember, lives barely worth living are still worth living.
“If there’s a scenario where one person’s utility function diverges to infinity, then both sum- and average-utility aggregation claim that it’s worth sacrificing everyone else to make sure that happens (the ‘utility monster’ problem)”
Again, this seems like scope insensitivity.