If I understand correctly, people become utilitarians because they think that global suffering/well-being has such a large value that all the other values don’t really matter (this is what I see every time someone tries to argue for utilitarianism; please correct me if I’m wrong). I think a lot of people don’t share this view, and therefore, before trying to convince them to choose utilitarianism as their morality, you first need to convince them of the overriding importance of harm and pleasure.
I think it depends? People around here use “utilitarianism” to mean a few different things. I imagine that’s the version talked about the most because the people involved in EA tend to be the type who hold that view (since it’s easier to get extra value via hacking if your most important values are something very specific and somewhat measurable). I think that might also be the usual philosopher’s definition. But then Eliezer (in the metaethics sequence) used “utilitarianism” to mean a general approach to ethics where you add up all the values involved and pick the best outcome, regardless of what your values are and how you weight them. So it’s sometimes a little confusing to know what “utilitarianism” means around here.
I do not believe Eliezer makes that mistake.
I might have misremembered. Sorry about that.
I don’t understand. One of those things is “compare the options, and choose the one with the best consequences”. What are the other things?
You are illustrating the issue :-) That is consequentialism, not utilitarianism.
Differences arise when you try to flesh out what “best consequences” means. A lot of people on this site seem to think utilitarianism interprets “best consequences” as “best consequences according to your own utility function”. This is actually not what ethicists mean when they talk about utilitarianism. They might mean something like “best consequences according to some aggregation of the utility functions of all agents” (where there is disagreement about what the right aggregation mechanism is or what counts as an agent). Or they might interpret “best consequences” as “consequences that maximize the aggregate pleasure experienced by agents” (usually treating suffering as negative pleasure). Other interpretations also exist.
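To make the difference concrete, here is a rough toy sketch (two hypothetical agents, made-up numbers, and a plain-sum aggregation chosen purely for illustration; none of it is meant as anyone’s canonical formalization):

```python
# Toy comparison of three readings of "best consequences".
# Agents, options, and all the numbers below are made up for illustration.

options = {
    "A": {"alice": {"pleasure": 5, "suffering": 1, "preference": 10},
          "bob":   {"pleasure": 1, "suffering": 3, "preference": 1}},
    "B": {"alice": {"pleasure": 3, "suffering": 0, "preference": 2},
          "bob":   {"pleasure": 6, "suffering": 0, "preference": 10}},
}

def own_utility(opts, me="alice"):
    # "Best consequences according to your own utility function"
    # (the reading ethicists do *not* mean by utilitarianism).
    return max(opts, key=lambda o: opts[o][me]["preference"])

def preference_utilitarian(opts):
    # Aggregate every agent's preferences; here the aggregation is a plain
    # sum, which is itself one of the contested choices.
    return max(opts, key=lambda o: sum(a["preference"] for a in opts[o].values()))

def hedonistic_utilitarian(opts):
    # Maximize total pleasure minus suffering across agents.
    return max(opts, key=lambda o: sum(a["pleasure"] - a["suffering"] for a in opts[o].values()))

print(own_utility(options))             # "A": Alice herself prefers A (10 vs 2)
print(preference_utilitarian(options))  # "B": summed preferences are 11 vs 12
print(hedonistic_utilitarian(options))  # "B": net pleasure is 2 vs 9
```

Even in this toy setup, the egocentric reading and the two aggregative readings can pick different options, which is the point of the distinction.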
As far as I’ve read, preference utilitarianism and its variants are about the only well-known systems of utilitarianism in philosophy that try to aggregate the utility functions of agents. Trying to come up with a universally applicable utility function seems to be more common; that’s what gets you hedonistic utilitarianism, prioritarianism, negative utilitarianism, and so forth. Other variants, like rule or motive utilitarianism, might take one of the above as a basis but be more concerned with implementation difficulties.
I agree that the term tends to be used too broadly around here—probably because the term sounds like it points to something along the lines of “an ethic based on evaluating a utility function against options”, which is actually closer to a working definition of consequentialism. It’s not a word that’s especially well defined, though, even in philosophy.
“Compare the options, and choose the one that results in the greatest (pleasure − suffering).”