Amartya Sen argues (see his Nobel Prize lecture: http://www.nobelprize.org/nobel_prizes/economics/laureates/1998/sen-lecture.pdf) that social choice theory requires making some interpersonal comparisons of utility: without such comparisons there is no way to evaluate the utility of total outcomes. However, the interpersonal comparisons need not be unlimited; having just some of them can be enough. Since interpersonal comparisons certainly do raise issues, they doubtless require restrictions similar to the ones you mention for the individual case, which seems to be why Sen counts it as a very good thing that restricted interpersonal comparisons may suffice.
I think that interpersonal “utility” is a different beast from von Neumann-Morgenstern (VNM) utility. VNM is fundamentally about sovereign preferences, not preferences within an aggregation.
Within moral philosophy we have an intuition that we ought to aggregate other people’s preferences, and we might think that using VNM is a good idea because it too is about preferences. But I think this is an error: VNM isn’t about preferences in that way.
We need a new thing, built from the ground up, for utilitarian preference aggregation. It may turn out to have similarities to VNM, but I would be very surprised if it actually turned out to be VNM.
Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we’ve discussed ad nauseam before)? In response to an argument of Harsanyi’s that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue.
If not, some useful references here.
ETA: I worry that I’ve unduly maligned Harsanyi by associating his argument too heavily with Phil’s post. Although I still think it’s wrong, Harsanyi’s argument is rather more sophisticated than Phil’s, and worth checking out if you’re at all interested in this area.
Giving one future self u=10 and another u=0 is exactly as good as giving one u=5 and another u=5.
This is the same ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.
No, not at all. You can’t derive mathematical results by playing word games. Even if you could, it doesn’t even make sense to take the average utility of a population. Different utility functions are not commensurable.
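To make the incommensurability concrete, here’s a minimal sketch (the agents, outcomes, and numbers are invented for illustration). A VNM utility function is only unique up to a positive affine transformation, so rescaling one person’s function, which changes nothing about their preferences, can flip which outcome “wins” on average utility:

```python
# Two agents, two outcomes. A VNM utility function is only defined up to a
# positive affine transformation (u -> a*u + b with a > 0), so each agent's
# scale is an arbitrary choice.
u_alice = {"X": 0.0, "Y": 1.0}   # Alice prefers Y
u_bob   = {"X": 1.0, "Y": 0.0}   # Bob prefers X

def average_utility(outcome, utility_functions):
    """Naive 'average utilitarian' score for an outcome."""
    return sum(u[outcome] for u in utility_functions) / len(utility_functions)

# With the original scales, X and Y tie on average utility.
print(average_utility("X", [u_alice, u_bob]))  # 0.5
print(average_utility("Y", [u_alice, u_bob]))  # 0.5

# Rescale Bob's function (a = 10, b = 0). Bob's preferences are unchanged
# (he still ranks X above Y), but now X wins the average outright.
u_bob_rescaled = {k: 10 * v for k, v in u_bob.items()}
print(average_utility("X", [u_alice, u_bob_rescaled]))  # 5.0
print(average_utility("Y", [u_alice, u_bob_rescaled]))  # 0.5
```

Nothing in the VNM framework privileges one scaling over another, so the “average” tracks an arbitrary modelling choice rather than anything about the people.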
This is clearer if you use a many-worlds interpretation, and think of maximizing expected value over possible futures as applying average utilitarianism to the population of all possible future yous.
No. That is not at all how it works. A deterministic coin toss will come up the same in all Everett branches, yet your subjective probability is distributed between two epistemically possible worlds. You can’t conflate branches with possible worlds; they are not the same.
Having your math rely on a misinterpreted physical theory is generally a bad sign...
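To spell the distinction out with a deliberately simplified worked example: for a deterministic coin, the expectation runs over epistemic possibilities, not over branches.

```latex
% Subjective expected utility for a coin whose outcome is deterministic but
% unknown: the weights 1/2 are credences over two epistemically possible worlds.
\[
\mathbb{E}[U] = \tfrac{1}{2}\,U(\text{heads}) + \tfrac{1}{2}\,U(\text{tails})
\]
% The Everett branch weights, by contrast, are degenerate here: 1 for the
% actual outcome and 0 for the other. There is only one kind of future self,
% so the expectation cannot be an average over a population of them.
```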
Therefore, I think that, if the 4 axioms are valid when calculating U(lottery), they are probably also valid when calculating not our private utility, but a social utility function s(outcome), which sums over people in a similar way to how U(lottery) sums over possible worlds.
Really? Translate the axioms into statements about people. Do they still seem reasonable?
Completeness. Doesn’t hold. Preferred by whom? The fact that we have a concept of “Pareto optimal” should raise your suspicions.
Transitivity. Assuming you can patch Completeness to deal with Pareto optimality, this may or may not hold. Show me the math.
Continuity. This assumes we let population frequency or some such stand in for probability. I reject the assumption that strict averaging by population is valid. So much for reasonable assumptions.
Independence. Adding another subpopulation to all outcomes is not necessarily a no-op; see the sketch below.
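A minimal sketch of this (the maximin rule and the numbers are invented for illustration): an egalitarian rule that ranks outcomes by their worst-off member stops strictly preferring the equal outcome once the same subpopulation is appended to both.

```python
def maximin(population):
    """Egalitarian social welfare: an outcome is only as good as its worst-off member."""
    return min(population)

# Individual utilities in two candidate outcomes.
equal   = [5, 5]     # everyone at 5
unequal = [2, 100]   # one person badly off, one very well off

print(maximin(equal) > maximin(unequal))  # True: 'equal' is strictly preferred

# Append the same subpopulation (one person at utility 1) to both outcomes.
# The probability analogue of Independence suggests this should change nothing,
# but the strict preference collapses into indifference.
print(maximin(equal + [1]), maximin(unequal + [1]))  # 1 1, a tie
```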
Other problems include the fact that population can change, while the sum of probabilities is always 1. The theorem probably relies on this.
Assuming you could construct some kind of coherent population-averaging theory from this, it would not involve utility or utility functions. It would be orthogonal to that, and would have to be able to take into account egalitarianism, population change, the varying moral importance of agents, and so on.
It is even more shocking that it is thus possible to prove, given reasonable assumptions, which type of utilitarianism is correct.
Shocking indeed.
While I’m in broad agreement with you here, I’d nitpick on a few things.
Different utility functions are not commensurable.
Agree that decision-theoretic or VNM utility functions are not commensurable—they’re merely mathematical representations of different individuals’ preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility of such comparisons, any social welfare function you try to construct will likely end up running afoul of Arrow’s impossibility theorem).
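As a minimal illustration of the kind of trouble Arrow formalizes (this is the classic Condorcet cycle rather than the theorem itself, and the preference profiles are invented): with purely ordinal rankings and no interpersonal comparisons, even simple majority aggregation can fail to produce a coherent social ordering.

```python
# Three voters' ordinal rankings, best to worst; no utility numbers at all.
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True: a cycle, so no transitive social ranking
```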
Translate the axioms into statements about people. Do they still seem reasonable?
I’m actually pretty much OK with Axioms 1 through 3 being applied to a population social welfare function. As Wei Dai pointed out in the linked thread (and Sen argues as well), it’s 4 that seems the most problematic when translated to a population context. (Dealing with varying populations tends to be a stumbling block for aggregationist consequentialism in general.)
That said, the fact that decision utility != substantive utility also means that even if you accepted that all 4 VNM axioms were applicable, you wouldn’t have proven average utilitarianism: the axioms do not, for example, rule out prioritarianism (which I think was Sen’s main point).
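As a minimal sketch of that gap (the concave transform and the numbers are invented for illustration): a prioritarian rule applies a concave function to individual utilities before summing, giving extra weight to the worse off, and it can disagree with average utilitarianism on the very same utility data.

```python
import math

# Two outcomes with identical total (and hence average) individual utility.
equal   = [4.0, 4.0]
unequal = [0.0, 8.0]

def average_util(population):
    """Average utilitarianism: social good is mean utility."""
    return sum(population) / len(population)

def prioritarian(population):
    """Prioritarianism: sum a concave transform, so gains to the worse off count more."""
    return sum(math.sqrt(u) for u in population)

print(average_util(equal), average_util(unequal))  # 4.0 4.0: indifferent
print(prioritarian(equal), prioritarian(unequal))  # 4.0 ~2.83: prefers 'equal'
```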
But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons.
I ignore it because they are entirely different concepts. I also ignore aerodynamics in this discussion. It is really unfortunate that we use the same word for them. It is further unfortunate that even LWers can’t distinguish between an apple and an orange if you call them both “apple”.
“That for which the calculus of expectation is legitimate” is simply not related to inter-agent preference aggregation.
I’m hesitant to get into a terminology argument when we’re in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.)
Yes, it’s annoying when people use the word ‘fruit’ to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I’d suggest that it’s not the most useful response to this problem to insist on using the word ‘fruit’ to refer exclusively to apples, and to proceed to make claims like ‘fruit can’t be orange coloured’ that are false for some types of fruit. (Even more so when people have been using the word ‘fruit’ to refer to oranges for longer than they’ve been using it to refer to apples.) Aren’t you just making it more difficult for people to get your point that apples and oranges are different?
On your current approach, every time you make a claim about fruit, I have to try to figure out from context whether you’re really making a claim about all fruit, or just apples, or just oranges. And if I guess wrong, we just end up in a pointless and avoidable argument. Surely it’s easier to instead phrase your claims as being about apples and oranges directly when they’re intended to apply to only one type of fruit?
P.S. For the avoidance of doubt, and with apologies for obviousness: fruit=utility, apples=decision utility, oranges=substantive utility.
“Fruit” is a natural category; apples and oranges share interesting characteristics that make it useful to talk about them in general.
“Utility” is not. The two concepts, “that for which expectation is legitimate” and some quantity related to inter-agent preference aggregation, do not share many characteristics, and they are not even on the same conceptual abstraction layer.
The VNM-stuff is about decision theory. The preference aggregation stuff is about moral philosophy. Those should be completely firewalled. There is no value to a superconcept that crosses that boundary.
As for me using the word “utility” in this discussion, I think it should be unambiguous that I am speaking of VNM-stuff, because the OP is about VNM, and utilitarianism and VNM do not belong in the same discussion, so you can infer that all uses of “utility” refer to the same thing. Nevertheless, I will try to come up with a less ambiguous word to refer to the output of a “preference function”.
The VNM-stuff is about decision theory. The preference aggregation stuff is about moral philosophy. Those should be completely firewalled. There is no value to a superconcept that crosses that boundary.
But surely the intuition that value ought to be aggregated linearly across “possible outcomes” is related to the intuition that value ought to be aggregated linearly across “individuals”? I think it basically comes down to independence: how much something (a lottery over possible outcomes / a set of individuals) is valued should be independent of other things (other parts of the total probabilistic mixture over outcomes / other individuals who exist).
When framed this way, the two problems in decision theory and moral philosophy can be merged into a single question, “where should one draw the boundary between things that are valued independently?”, and the general notion of “utility” as “a representation of preference that can be evaluated on certain objects independently of others and then aggregated linearly” does seem to have value.
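To make the parallel explicit (a sketch; the notation is mine): expected utility and utilitarian aggregation are both linear functionals, over probabilities in one case and over people in the other.

```latex
% Expected utility over outcomes o_i with probabilities p_i:
\[
U(\text{lottery}) = \sum_i p_i \, u(o_i), \qquad \sum_i p_i = 1
\]
% Utilitarian aggregation over individuals j with weights w_j:
\[
W(\text{outcome}) = \sum_j w_j \, u_j(\text{outcome}), \qquad \sum_j w_j = 1
\]
% Harsanyi's aggregation theorem connects the two: if individual and social
% preferences all satisfy the VNM axioms plus a Pareto condition, then W must
% be an affine combination of the u_j, though the weights (and each u_j's
% scale) remain undetermined.
```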
There is no value to a superconcept that crosses that boundary.
This doesn’t seem to me to argue in favour of using wording that’s associated with the (potentially illegitimate) superconcept to refer to one part of it. Also, the post you were responding to (conf)used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn’t belong there.
Two additional things, FWIW:
(1) There’s a lot of existing literature that distinguishes between “decision utility” and “experienced utility” (where “decision utility” corresponds to preference representation), so there is terminology already out there. (Although “experienced utility” doesn’t necessarily have anything to do with preference or welfare aggregation either.)
(2) I view moral philosophy as a special case of decision theory (and e.g. axiomatic approaches and other tools of decision theory have been quite useful in moral philosophy), so to the extent that your firewall intends to cut that off, I think it’s problematic. (Not sure that’s what you intend, but it’s one interpretation of your words in this comment.) Even Harsanyi’s argument, while flawed, is interesting in this regard (it’s much more sophisticated than Phil’s post, so I’d recommend checking it out if you haven’t already).