What we talk about when we talk about maximising utility
tl;dr: “Utility” is used on LW to mean what people want, but that’s not what’s morally relevant. Utilitarians aren’t trying to maximise this sort of utility, but rather “well-being”.
Epistemic status: probably obvious to some, but this particular framing wasn’t totally clear to me until recently, and the terminology is definitely ambiguous.
Use of the term “utility” on Less Wrong implicitly conflates two definitions. Consider person X. In the economic sense, X’s utility corresponds to the things that X would choose to maximise; we can abstract this as a “utility function” which maps each possible world to a real number. For example, if X would save the lives of their family even at the cost of their own life, then we’d say that X assigns higher utility to a world in which their family lives happily than to one in which X themselves survives. This is perfectly reasonable and normal. (Some people argue that X is actually prioritising their own happiness, because if they chose otherwise they’d be miserable from guilt. But this seems like an implausible model of their actual reasoning; I don’t think many people who would save their families over themselves would change their minds even if offered guaranteed happiness afterwards.) A similar definition of utility is used when reasoning about artificial agents; for example, the LW Wiki says “Utility is how much a certain outcome satisfies an agent’s preferences”.
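As a concrete (and entirely made-up) illustration of the economic sense, here is a minimal sketch of a utility function as a mapping from possible worlds to real numbers; the worlds and the numbers assigned to them are invented purely for this example.

```python
# A toy "utility function" in the economic sense: a mapping from possible
# worlds to real numbers, where a higher number means the agent prefers
# that world. The worlds and values here are invented for illustration.
x_utility = {
    "X dies, X's family lives happily": 10.0,
    "X lives, X's family dies": 3.0,
    "everyone dies": 0.0,
}

def preferred_world(utility, available_worlds):
    """X would choose whichever available world has the highest utility."""
    return max(available_worlds, key=utility.get)

print(preferred_world(x_utility, ["X dies, X's family lives happily",
                                  "X lives, X's family dies"]))
# -> "X dies, X's family lives happily"
```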
However, this makes it very confusing to talk about maximising utility as a moral goal. Taken literally, maximising (economic) utility means wanting the sum of all people’s utility functions to be as high as possible. (Edit: in the standard definition of economic utility, this is not well-defined, since utilities can’t be compared between people. The following argument is one intuitive reason why we can’t maximise even versions of economic-style utility which do allow interpersonal comparison, such as the ones I’ll discuss later.) But by doing so, we are double-counting! Let’s say I assign utility U to living a happy life, and utility U+1 to my wife living a happy life; my wife does the converse. If we both have happy lives, our total utility is 4U+2, which means that our lives should be prioritised over the lives of four other people who value their own lives just as highly, but don’t care much about other people! This is bizarre, and gets more so when we consider that people might have many strong relationships. By this calculation method, a family of five people who all value each other over themselves have more total utility than 25 equally happy loners. Obviously maximising this sort of utility is not what standard utilitarians want.
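To make the arithmetic explicit, here is a short sketch of the sums above; the specific value of U is my own arbitrary choice, and the comparisons hold regardless of it.

```python
# Spelling out the double-counting sums. U is arbitrary: the gaps of +2
# and +20 below do not depend on its value.
U = 10

# The couple: each assigns U to their own happy life and U + 1 to their
# spouse's, so summing across both people counts each life twice.
couple_total = 2 * (U + (U + 1))        # 4U + 2

# Four loners who value their own lives just as highly, and nobody else's.
four_loners_total = 4 * U               # 4U

# A family of five, each valuing the other four members at U + 1 apiece.
family_total = 5 * (U + 4 * (U + 1))    # 25U + 20

# Twenty-five equally happy loners.
loners_total = 25 * U                   # 25U

print(couple_total, four_loners_total)  # 42 40
print(family_total, loners_total)       # 270 250
```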
By contrast, “utility” as used in the context of utilitarianism and ethics in general (which I will from now on call well-being) is a metric of how good a life is for the person living it. There are various accounts of well-being; the two most prominent types are desire theories and hedonic theories. Under the former, a person has high well-being if the things they desire actually occur, even if they never find out about them. This is basically the same as the definition of utility I outlined above—which means it faces exactly the same double-counting problem. Hedonic theories of well-being, on the other hand, imply that your well-being is a function of only your psychological state. There are many different functions it could be: for example, ones which care only about suffering, or also about pleasure, or also about a sense of fulfillment and meaningfulness. The specifics don’t matter for our purposes; let’s accept the broad idea and see where it leads.
Unfortunately, it immediately leads us to a major problem: since well-being is distinct from utility, people’s actions aren’t a good guide to their actual well-being function. In fact, maximising the well-being of any group of people might be opposed by every person affected by the intervention! Consider first a group of size one: just me. Suppose my life’s goal is to write the greatest novel ever, even though I know that slaving away to complete it will make me less happy than I could have been. I also know that if I ever stop working on it, I’ll become lazy, my goals will change, and I’ll settle for a happy but boring life. You decide that you could maximise my well-being by forcing me to stop working on it—and by the hedonic account above, you’d be doing a moral good even though I’d fight you tooth and nail.
One more example, this time with n=2: suppose I am about to suffer torture. Suppose also that I have a wife, whom I love deeply, although she doesn’t love me nearly as much; also, she has a higher pain tolerance than I do. Now you intervene so that, without my knowledge, my wife is tortured instead of me. My well-being is now higher than it would have been, and the total well-being between the two of us is also higher (since she can bear the pain better). Yet if either of us heard about your plan, we would both strongly object.
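With some purely hypothetical numbers for how much the torture would cost each of us hedonically, the accounting behind this example looks roughly like this:

```python
# Illustrative numbers only: the hedonic cost of the torture for each person.
# The wife's higher pain tolerance is modelled as a smaller well-being loss.
my_loss_if_tortured = 100
wifes_loss_if_tortured = 60   # she bears the pain better

# Option A: I am tortured, as originally planned.
total_loss_A = my_loss_if_tortured            # 100

# Option B: you secretly swap the victim. I lose nothing (I never find out),
# and the total loss is smaller -- yet both of us would object to the swap.
total_loss_B = wifes_loss_if_tortured         # 60

print(total_loss_B < total_loss_A)            # True
```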
Some people are willing to bite the bullet and say that we should just maximise hedonic well-being even if all the people we are “benefiting” think we’re making their lives worse. This implies that, all else being equal, it would be better to force everyone into experience machines, because psychological experiences are all that matter. At a certain point, accepting or rejecting this position comes down to a brute clash of intuitions. I think that my life would have less value if all my friends were secretly contemptuous of me, and all the things I learned throughout my life were actually wrong, and after my death I was despised—even if I never found out about any of those facts. Your mileage may vary.
The best compromise I can come up with is one in which your well-being is the sum of a desire-satisfaction function and a hedonic function—but where the desires we consider are limited to those about your own life. As always with morality, this is somewhat vague. For example, you might desire to have a child, and desire that the child has certain traits, goes into a certain career, and has a good life. Where do these desires stop being “about you”? I don’t think there’s any clear line to be drawn between desires that are and aren’t about your own life, but if we want people’s desires to be morally relevant in a sensible way, we need to pick some boundary; even if they are all well-informed and reflectively consistent, we can’t just classify them all as part of the “utility function” which should be maximised.
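To illustrate the shape of this compromise, here is a rough sketch which simply assumes that the fuzzy “about your own life” boundary has already been drawn and that the two components are weighted equally; the names, weights, and classifications are placeholders rather than a worked-out proposal.

```python
from dataclasses import dataclass

@dataclass
class Desire:
    description: str
    satisfied: bool
    strength: float
    about_own_life: bool   # the fuzzy boundary discussed above; assumed given

def wellbeing(hedonic_score, desires):
    """Sketch of the proposed compromise: hedonic state plus satisfaction of
    the desires that are about the person's own life. How to weight the two
    components against each other is left open."""
    desire_score = sum(d.strength for d in desires
                       if d.satisfied and d.about_own_life)
    return hedonic_score + desire_score

# Example: the desire about one's own novel counts; the desire about the
# child's career is (on this rough classification) excluded.
desires = [
    Desire("I write a great novel", satisfied=True, strength=5.0,
           about_own_life=True),
    Desire("my child goes into a certain career", satisfied=True, strength=3.0,
           about_own_life=False),
]
print(wellbeing(hedonic_score=10.0, desires=desires))  # 15.0
```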