Usually, utilitarianism means maximizing the utility of all people/agents/beings of moral worth (averaged or summed, depending on the flavor of utilitarianism). Eliezer’s metaethics says to maximize only your own utility. There is a clear distinction.
Edit: but you are correct about considering preferences the foundation of ethics. I should have been clearer.
Isn’t that bog-standard ethical egoism? If that is the case, then I really misunderstood the sequences.
Maybe. Sometimes ethical egoism sounds like it says that you should be selfish. If that’s the case, then no, they are not the same. But sometimes it just sounds like it says you should do whatever you want to do, even if that includes helping others. If that’s the case, they sound the same to me.
Edit: Actually, that’s not quite right. On the second version, egoism gives the same answer as EY’s metaethics for all agents who have “what is right” as their terminal values, but NOT for any other agent. Egoism in this sense defines “should” as “should_X”, where X is the agent asking what should be done. For EY, “should” is always “should_human”, no matter who is asking the question.
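(A minimal sketch of that indexing difference, in toy Python; the agent names, the utility-function labels, and the idea of rendering “should” as a function are purely illustrative, not anyone’s actual formalism.)

```python
# Toy contrast between the two readings of "should". The agent names and
# utility-function labels are invented purely for illustration.

utility_function_of = {"alice": "U_alice", "bob": "U_bob", "human": "U_human"}

def egoist_should(asker: str) -> str:
    # Egoism (second reading): "should" is indexed to whoever is asking.
    return f"maximize {utility_function_of[asker]}"

def ey_should(asker: str) -> str:
    # EY's metaethics (as I read it): "should" always means should_human,
    # so the asker's identity is deliberately ignored.
    return f"maximize {utility_function_of['human']}"

print(egoist_should("alice"))  # maximize U_alice
print(egoist_should("bob"))    # maximize U_bob
print(ey_should("alice"))      # maximize U_human
print(ey_should("bob"))        # maximize U_human
```

On this toy picture the two readings agree only for an asker whose own utility function already is U_human, which is the point of the edit above.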
Indeed, but I’d like to point out that this is not an answer about what to do or what’s good and bad, merely the rejection of a commonly claimed (but incorrect) statement about what structure such an answer should have.
I think I disagree, but I’m not sure I understand. Care to explain further?
(Note: This comment contains positions which came from my mind without an origin tag attached. I don’t remember reading anything by Eliezer which directly disagrees with this, but I don’t represent this as anyone’s position but my own.)
“Standard” utilitarianism works by defining a separate per-agent utility function to represent each person’s preferences, then averaging (or summing) them to produce a composite utility function which every utilitarian is supposed to optimize. The exact details of what the per-agent utility functions look like, and how you combine them, differ from flavor to flavor. However, this structure (splitting the utility function up into per-agent utility functions plus an aggregation rule) is wrong. I don’t know what a utility function that fully captured human values would look like, but I do know that it can’t be split and composed this way.
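(For concreteness, a minimal sketch of that structure in Python; the outcome encoding, the agent names, and the numbers are all made up, and the point is only the shape: per-agent utility functions plus a sum-or-average aggregation step.)

```python
# Toy sketch of the "per-agent utility functions plus aggregation" structure.
# The outcome encoding, agent names, and numbers are invented for illustration.

from statistics import mean

# Each agent's utility function maps an outcome to a real number.
per_agent_utility = {
    "alice": lambda outcome: outcome["alice_welfare"],
    "bob":   lambda outcome: outcome["bob_welfare"],
}

def composite_utility(outcome, aggregate=sum):
    """Combine the per-agent utilities with an aggregation rule (sum or mean)."""
    return aggregate(u(outcome) for u in per_agent_utility.values())

outcome = {"alice_welfare": 3.0, "bob_welfare": 7.0}
print(composite_utility(outcome, aggregate=sum))   # total utilitarianism: 10.0
print(composite_utility(outcome, aggregate=mean))  # average utilitarianism: 5.0
```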
It breaks down most obviously when you start varying the number of agents; in the variant where you sum up utilities, an outcome where many people live lives just barely worth living seems better than an outcome where fewer people live amazingly good lives (but we actually prefer the latter); in the variant where you average utilities, an outcome where only one person exists but he lives an extra-awesome life is better than an outcome where many people lead merely-awesome lives.
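(To make the breakdown concrete, here are made-up numbers; the population sizes and utility levels are invented and only meant to show how the two aggregation rules come apart as the number of agents varies.)

```python
# Invented numbers showing how sum- and average-aggregation misbehave when
# the number of agents varies.

def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

barely_worth_living = [0.01] * 10_000_000   # huge population, lives barely worth living
amazingly_good      = [90.0] * 1_000        # small population, amazingly good lives

# Sum-aggregation ranks the huge barely-worth-living population higher:
assert total(barely_worth_living) > total(amazingly_good)         # ~100,000 > 90,000

one_extra_awesome   = [100.0]               # a single extra-awesome life
many_merely_awesome = [99.0] * 1_000_000    # many merely-awesome lives

# Average-aggregation ranks the lone extra-awesome life higher:
assert average(one_extra_awesome) > average(many_merely_awesome)  # 100 > 99
```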
Split-agent utility functions are also poorly equipped to deal with the problem of weighing agents against each other. If there’s a scenario where one person’s utility function diverges to infinity, then both sum- and average-utility aggregation claim that it’s worth sacrificing everyone else to make sure that happens (the “utility monster” problem).
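(The same toy setup, with an invented large number standing in for “diverges to infinity”:)

```python
# Toy "utility monster": one agent whose utility becomes enormous if everyone
# else is sacrificed. The numbers are invented; 1e9 stands in for "diverging".

def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

population = 1_000_000
feed_the_monster = [1e9] + [-100.0] * population   # monster thrives, everyone else suffers
status_quo       = [50.0] * (population + 1)       # everyone does reasonably well

# Both aggregation rules endorse feeding the monster:
assert total(feed_the_monster) > total(status_quo)      # ~9.0e8 > ~5.0e7
assert average(feed_the_monster) > average(status_quo)  # ~900 > 50
```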
And the thing is, writing a utility function that captures human values is a hard and unsolved problem, and splitting it up by agent doesn’t actually bring us any closer; defining the single-agent function is just as hard as defining the whole thing.
I was about to cite the same sorts of things to explain why they DO disagree about what is good and bad. In other words, I agree with you about utilitarianism being wrong about the structure of ethics in precisely the way you described, but I think that also entails utilitarianism coming to different concrete ethical conclusions. If a murderer really likes murdering (it’s truly a terminal value), the utilitarian HAS to take that into account. On Eliezer’s theory, this need not be so. So you can construct a hypothetical where the utilitarian has to allow someone to be murdered simply to satisfy one murderer’s (or many murderers’) preference, whereas on Eliezer’s theory nothing of this nature has to be done.
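(One way to put numbers on such a hypothetical; the figures are invented, and the only point is that under sum-aggregation enough murder-enjoyers eventually outweigh the victim.)

```python
# Invented numbers for the murderer hypothetical: one victim, many agents who
# terminally value the murder. Sum-aggregation weighs their preference
# satisfaction directly against the victim's loss.

victim_loss       = -1000.0   # disutility to the victim of being murdered
per_murderer_gain = 1.0       # each murder-enjoyer's preference satisfaction
num_murderers     = 2000

net_utility_of_murder = victim_loss + per_murderer_gain * num_murderers
print(net_utility_of_murder)  # +1000.0: the aggregation calls the murder net-positive
```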
That is a problem for average-over-agents utilitarianism, but not a fatal one; the per-agent utility function you use need not reflect all of that agent’s preferences. It can reflect something narrower, like “that agent’s preferences, excluding preferences that refer to other agents and which those agents would choose to veto”. (Of course, that’s a terrible hack, which must be added to the hacks needed to deal with varying population sizes, divergence, and so on, and the resulting theory ends up being extremely inelegant.)
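(A sketch of what that hack might look like; the class names, the veto rule, and the numbers are all hypothetical, meant only to show the shape of “exclude other-regarding preferences that the referenced agent would veto”.)

```python
# Hypothetical sketch of the "veto" hack: represent each agent's utility as a
# list of preference terms, tag each term with the agents it refers to, and
# drop other-regarding terms that a referenced agent would veto.

from dataclasses import dataclass, field

@dataclass
class Preference:
    weight: float
    description: str
    refers_to: set = field(default_factory=set)   # other agents this term is about

@dataclass
class Agent:
    name: str
    preferences: list
    vetoed: set = field(default_factory=set)      # descriptions this agent would veto

    def vetoes(self, pref: Preference) -> bool:
        return self.name in pref.refers_to and pref.description in self.vetoed

def filtered_utility(agent: Agent, others: list) -> float:
    """Sum the agent's preference weights, excluding vetoed other-regarding terms."""
    kept = [p for p in agent.preferences
            if not any(other.vetoes(p) for other in others)]
    return sum(p.weight for p in kept)

victim   = Agent("victim", [Preference(10.0, "stay alive")], vetoed={"murder victim"})
murderer = Agent("murderer", [Preference(5.0, "enjoy hobbies"),
                              Preference(8.0, "murder victim", {"victim"})])

# The murder preference refers to the victim, who vetoes it, so it is excluded:
print(filtered_utility(murderer, [victim]))   # 5.0, not 13.0
```

Even in this toy form, the filter has to encode contested judgments (which terms count as “about” another agent, and what counts as a veto), which is part of why it reads as a hack.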
True enough, there are always more hacks a utilitarian can throw onto their theory to avoid issues like this.
in the variant where you sum up utilities, an outcome where many people live lives just barely worth living seems better than an outcome where fewer people live amazingly good lives (but we actually prefer the latter);

Are you sure of this? It sounds a lot like scope insensitivity. Remember, lives barely worth living are still worth living.
Again, this seems like scope insensitivity.
Uh, well, it seems then that my memory tricked me. I remembered otherwise.
Though given his thoughts on extrapolation and his hopes that this will be coherent and human-universal, it would collapse into the same thing.
If there’s a scenario where one person’s utility function diverges to infinity, then both sum- and average-utility aggregation claim that it’s worth sacrificing everyone else to make sure that happens (the “utility monster” problem).

Yeah, that’s probably right. But notice that even in that case, unlike for the utilitarian, there are no thorny issues about how to deal with non-human agents. If we run into an alien that has a serious preference for raping humans, the utilitarian only has ad-hoc ways of deciding whether or not the alien’s preference counts. Eliezer’s metaethics handles it elegantly: check your utility function. Of course, that’s easier said than done in the real world, but it does solve many philosophical problems associated with utilitarianism.