You’re right that this is the axiom that’s starkly nonobvious in Phil’s attempted application (by analogy) of the theorem. I’d go further, and say that it basically amounts to assuming the controversial bit of what Phil is seeking to prove.
And I’ll go further still and suggest that in the original von Neumann-Morgenstern theorem, this axiom is again basically smuggling in a key part of the conclusion, in exactly the same way. (Is it obviously irrational to seek to reduce the variance in the outcomes that you face? vN-M are effectively assuming that the answer is yes. Notoriously, actual human preferences typically have features like that.)
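To make that concrete, here is a minimal sketch, with invented lotteries and an invented mean-minus-variance rule (none of this is from the thread), of how a preference that penalizes variance across outcomes ends up violating the independence axiom:

```python
# A hypothetical variance-averse ranking of lotteries: V(L) = E[u] - Var[u].
# A lottery is a list of (probability, utility) pairs.

def mean(lottery):
    return sum(p * u for p, u in lottery)

def variance(lottery):
    m = mean(lottery)
    return sum(p * (u - m) ** 2 for p, u in lottery)

def score(lottery):
    return mean(lottery) - variance(lottery)

def mix(p, la, lb):
    """Compound lottery: la with probability p, lb with probability 1 - p."""
    return [(p * q, u) for q, u in la] + [((1 - p) * q, u) for q, u in lb]

A = [(1.0, 1.0)]   # a sure outcome worth 1
B = [(1.0, 1.1)]   # a sure outcome worth 1.1
C = [(1.0, 0.0)]   # a sure outcome worth 0

print(score(A), score(B))                            # 1.0 vs 1.1    -> B preferred to A
print(score(mix(0.5, A, C)), score(mix(0.5, B, C)))  # 0.25 vs ~0.2475 -> mixture with A preferred
# Independence requires the A-vs-B ranking to survive mixing with C; here it flips,
# so this kind of variance-averse preference is exactly what the axiom rules out.
```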
I think the two comments above by Wei Dai and gjm are SPOT ON. Thank you.
And my final conclusion is, then:
Either become an average utilitarian; or stop defining rationality as expectation maximization.
That’s unwarranted. Axioms are being applied to describe very different processes, so you should look at their applications separately. In any case, reaching a “final conclusion” without an explicit write-up (or discovering a preexisting write-up) to check the sanity of the conclusion is in most cases a very shaky step, predictably irrational.
Okay: Suppose you have two friends, Betty and Veronica, and one balloon. They both like balloons, but Veronica likes them a little bit more. Therefore, you give the balloon to Veronica.
You get one balloon every day. Do you give it to Veronica every day?
Ignore whether Betty feels slighted by never getting a balloon. If we considered utility and disutility due to the perception of equity and inequity, then average utilitarianism would also produce somewhat equitable results. The claim that inequity is a problem in average utilitarianism does not depend on the subjects perceiving the inequity.
Just to be clear about it, Betty and Veronica live in a nursing home, and never remember who got the balloon previously.
You might be tempted to adopt a policy like this: p(v) = .8, p(b) = .2, meaning you give the balloon to Veronica eight times out of 10. But the axiom of independence assumes that it is better to use the policy p(v) = 1, p(b) = 0.
This is a straightforward application of the theorem, without any mucking about with possible worlds. Are you comfortable with giving Veronica the balloon every day? Or does valuing equity mean that expectation maximization is wrong? I think those are the only choices.
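As a rough sketch of the choice being posed (the per-balloon utilities and the equity penalty are made-up numbers): any score that is linear in the probabilities prefers the pure policy whenever Veronica’s per-balloon utility is higher, and only a nonlinear, equity-valuing score can prefer the mixed one.

```python
# Hypothetical per-balloon utilities.
u_veronica = 1.0
u_betty = 0.9       # Betty likes balloons a little less

def expected_utility(p_v):
    """Linear (vNM-style) score of the daily policy: give to Veronica with probability p_v."""
    return p_v * u_veronica + (1 - p_v) * u_betty

def equity_valuing_score(p_v, penalty=0.5):
    """A made-up nonlinear score that also penalizes how lopsided the long-run shares are."""
    return expected_utility(p_v) - penalty * abs(p_v - 0.5)

print(expected_utility(1.0), expected_utility(0.8))          # 1.0 vs 0.98: the pure policy wins
print(equity_valuing_score(1.0), equity_valuing_score(0.8))  # 0.75 vs 0.83: the mixed policy wins
```

With any strictly positive gap between the per-balloon utilities, the linear score is maximized at p(v) = 1; a strict preference for the mixed policy requires a score that is nonlinear in the probabilities, which is what the independence axiom rules out.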
I can have a utility function defined over how the balloons get allocated over time, one that is zero if Veronica gets every balloon.
This will compel me, on day one, to compare different ways I could organize the world, and to adopt the one whose future has Veronica getting more balloons, but not excessively more (as giving them all to Veronica has utility 0). Note: the ‘world’ defines its future. As a consequence, I’d allocate a balloon counter, write up a balloon schedule, or the like.
I don’t see how that is at odds with expected utility maximization. If it were at odds, I’d expect you to be able to come up with a Dutch Book style scenario demonstrating some inconsistency between my choices (and I would expect myself to be able to come up with such a scenario).
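A sketch of how such a utility function might look (the particular formula here is a made-up stand-in, not the one the commenter had in mind): define utility over the cumulative allocation so that it is zero when Veronica gets everything, maximize it greedily day by day, and a schedule that favors Veronica without shutting out Betty falls out.

```python
# Hypothetical utility over the cumulative allocation (n_v balloons to Veronica,
# n_b to Betty): zero if either count is zero, maximized with Veronica somewhat ahead.
def history_utility(n_v, n_b):
    return (n_v ** 0.6) * (n_b ** 0.4)

n_v = n_b = 0
schedule = []
for day in range(10):
    # Give today's balloon to whoever raises the history utility more.
    if history_utility(n_v + 1, n_b) >= history_utility(n_v, n_b + 1):
        n_v += 1
        schedule.append("V")
    else:
        n_b += 1
        schedule.append("B")

print("".join(schedule), (n_v, n_b))   # VBVBVVBVBV (6, 4): Veronica favored, Betty not shut out
```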
It’s compatible with utility maximization (you have a utility function and you’re maximizing it) but it’s not compatible with world utility maximization, which is required for utilitarianism.
That utility function takes the world as an input; I’m not sure what you mean by “world utility maximization”.
The maximization of the sum (or average) of the utilities of all beings in the world.
I believe this line of the grandparent discusses what you’re discussing:
If we considered utility and disutility due to the perception of equity and inequity, then average utilitarianism would also produce somewhat equitable results.
Betty and Veronica don’t need to know of one another. The formula I gave produces rather silly results, but the point is that you can consistently define the utility of a world state in such a way that it intrinsically values equality.
Right, then blacktrance’s complaint holds that you’re not just adding up the utilities of all the agents in the world, which is a condition of utilitarianism.
PhilGoetz was trying to show that to be correct or necessary from first principles that do not simply assert it. Had his point been “average utilitarianism must be correct because summation is a condition of utilitarianism”, I wouldn’t have bothered replying (and he wouldn’t have bothered writing a long post).
Besides, the universe is not made of “agents”; an “agent” is just a loosely fitting abstraction that falls apart if you try to zoom in on the details. And summing utility across agents is entirely nonsensical, for the reason that utility is only defined up to a positive affine transformation.
edit: also, hedonistic utilitarianism, at least as originally conceived, sums pleasure rather than utility. Those are distinct, in that pleasure may be numerically quantifiable—we may one day have a function that looks at a high-resolution 3D image and tells us how much pleasure the mechanism depicted in it is feeling (a real number that can be compared across distinct structures).
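A small illustration of the affine-transformation point (the welfare numbers are invented): rescaling one agent’s utility function, which leaves that agent’s own preference ordering completely unchanged, can flip which outcome “maximizes the sum”.

```python
# Two candidate outcomes; each tuple is (agent 1's utility, agent 2's utility).
outcome_x = (3.0, 1.0)
outcome_y = (1.0, 2.0)

def summed(outcome, scale_2=1.0, shift_2=0.0):
    """Sum of utilities after applying a positive affine transformation
    u -> scale_2 * u + shift_2 to agent 2's utility only."""
    u1, u2 = outcome
    return u1 + (scale_2 * u2 + shift_2)

print(summed(outcome_x), summed(outcome_y))                            # 4.0 vs 3.0: X "wins"
print(summed(outcome_x, scale_2=5.0), summed(outcome_y, scale_2=5.0))  # 8.0 vs 11.0: now Y "wins"
# Both scalings represent exactly the same preferences for agent 2, yet the
# interpersonal sum ranks the outcomes differently, so the sum is not well defined.
```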
Imagine that instead of balloons you’re giving food. Veronica has no food source and a day’s worth of food has a high utility to her—she’d go hungry without it. Betty has a food source, but the food is a little bland, and she would still gain some small amount of utility from being given food. Today you have one person-day worth of food and decide that Veronica needs it more, so you give it to Veronica. Repeat ad nauseam; every day you give Veronica food but give Betty nothing.
This scenario is basically the same as yours, but with food instead of balloons—yet in this scenario most people would be perfectly happy with the idea that only Veronica gets anything.
Alternatively, Veronica and Betty both have secure food sources. Veronica’s is slightly more bland relative to her preferences than Betty’s. A simple analysis yields the same result: you give the rations to Veronica every day.
Of course, if you compare across the people’s entire lives, you would find yourself switching between the two, favoring Veronica slightly. And if Veronica would have no food without your charity, you might have her go hungry on rare occasions in order to improve Betty’s food for a day.
This talks about whether you should analyze the delta utility of an action versus the resulting total utility of the people involved. It doesn’t address whether, when deciding what to do with a population, your cost function should use average utility per person or the total utility of the population. That second problem only crops up when deciding whether to add or remove people from a population—average utilitarianism in that sense recommends killing people who are happy with their lives but not as happy as average, while total utilitarianism would recommend increasing the population to the point of destitution and near-starvation as long as it could be done efficiently enough.
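A toy calculation of that second problem, with made-up welfare numbers:

```python
# Hypothetical welfare levels for a small population.
population = [10.0, 10.0, 4.0]   # the third person is happy, just below the average

def average(ws):
    return sum(ws) / len(ws)

def total(ws):
    return sum(ws)

# Average utilitarianism: removing the happy-but-below-average person raises the score.
print(average(population), average(population[:2]))   # 8.0 -> 10.0

# Total utilitarianism: adding many barely-worth-living lives raises the score.
crowded = population + [0.1] * 500
print(total(population), total(crowded))               # 24.0 -> about 74
```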
The point is that the “most people wouldn’t like this” test fails.
It’s just not true that always giving to one person and never giving to another person is a situation that most people would, as a rule, object to. Most people would sometimes object, and sometimes not, depending on circumstances—they’d object when you’re giving toys such as balloons, but they won’t object when you’re giving necessities such as food to the hungry.
Pointing out an additional situation when most people would object (giving food when the food is not a necessity) doesn’t change this.
We haven’t proved that you must either become an average utilitarian, or stop describing rationality as expectation maximization. But we’ve shown that there are strong reasons to believe that proposition. Without equally strong reasons to doubt it, it is in most cases rational to act as if it were true (depending on the utility of its truth or falsehood).
(And, yes, I’m in danger of falling back into expectation maximization in that last sentence. I don’t know what else to do.)
Phil, I’ve finally managed to find a paper addressing this issue that doesn’t appear to be behind a paywall.
Weymark, John (2005) “Measurement Theory and the Foundations of Utilitarianism”
Please read it. Even if you don’t agree with it, it should at the very least give you an appreciation that there are strong reasons to doubt your conclusion, and that there are people smarter/more knowledgeable about this than either of us who would not accept it. (For my part, learning that John Broome thinks there could be something to the argument has shifted my credence in it slightly, even if Weymark ultimately concludes that Broome’s argument doesn’t quite work.)
The discussion is framed around Harsanyi’s axiomatic “proof” of utilitarianism, but I’m fairly sure that if Harsanyi’s argument fails for the reasons discussed, then so will yours.
EDIT: I’d very much like to know whether (a) reading this shifts your estimate of either (i) whether your argument has provided strong reasons for anything, or (ii) whether utilitarianism is true (conditional on expectation maximization being rational); and (b) if not, why not?
I haven’t read it yet. I’ll probably go back and change the word “strong”; it is too subjective, and provokes resistance, and is a big distraction. People get caught up protesting that the evidence isn’t “strong”, which I think is beside the point. Even weak evidence for the argument I’m presenting should still be very interesting.
When there are strong reasons, it should be possible to construct a strong argument, one you can go around crushing sceptics with. I don’t see anything salient in this case, to either support or debunk, so I’m either blind, or the argument is not as strong as you write it to be. It is generally good practice to run every available verification routine; it helps you find your way in the murky pond of weakly predictable creativity.
I really only need a preponderance of evidence for one side (utilities being equal). If I have a jar with 100 coins in it and you ask me to bet on the flip of a coin drawn from it, and I know that one coin in the jar has two heads, I should bet heads. And you have to bet in this case—you have to have some utility function, if you’re claiming to be a rational utility-maximizer.
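The arithmetic, under the natural reading that the coin is drawn from the jar uniformly at random and then flipped once:

```python
# 100 coins, one of them double-headed; draw one at random and flip it.
p_double_headed = 1 / 100
p_fair = 99 / 100
p_heads = p_fair * 0.5 + p_double_headed * 1.0
print(p_heads)   # 0.505 > 0.5, so a rational bettor takes heads
```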
The fact that I have given any reason at all to think that you have to choose between being an average utilitarian, or stop defining rationality as expectation maximization, is in itself interesting, because of the extreme importance of the subject.
I don’t see anything salient in this case, to either support or debunk, so I’m either blind, or the argument is not as strong as you write it to be.
Do you mean that you don’t see anything in the original argument, or in some further discussion of the original argument? If you “don’t see anything salient”, then identify a flaw in my argument. Otherwise, you’re just saying, “I can’t find any problems with your argument, but I choose not to update anyway.”
I’m sympathetic to this, but I’m not sure it’s entirely fair. It probably just means you’re talking past each other. It’s very difficult to identify specific flaws in an argument when you just don’t see how it is supposed to be relevant to the supposed conclusion.
If this were a fair criticism of Vladimir, then I think it would also be a fair criticism of you. I’ve provided what I view as extensive, and convincing (to me! (and to Amartya Sen)) criticisms of your argument, to which your general response has been, not to point out a flaw in my argument, but instead to say “I don’t see how this is relevant”.
This is incredibly frustrating to me, just as Vladimir’s response probably seems frustrating to you. But I’d like to think it’s more a failure of communication than it is bloody-mindedness on your or Vladimir’s part.
Fair enough. It sounded to me like Vladimir was saying something like, “I think your argument is all right; but now I want another argument to support the case for actually applying your argument”.
I haven’t read that paper you referenced yet. If you have others that are behind paywalls, I can likely get a copy for us.