Okay: Suppose you have two friends, Betty and Veronica, and one balloon. They both like balloons, but Veronica likes them a little bit more. Therefore, you give the balloon to Veronica.
You get one balloon every day. Do you give it to Veronica every day?
Ignore whether Betty feels slighted by never getting a balloon. If we considered utility and disutility due to the perception of equity and inequity, then average utilitarianism would also produce somewhat equitable results. The claim that inequity is a problem in average utilitarianism does not depend on the subjects perceiving the inequity.
Just to be clear about it, Betty and Veronica live in a nursing home, and never remember who got the balloon previously.
You might be tempted to adopt a policy like this:
p(v) = .8, p(b) = .2,
meaning you give the balloon to Veronica eight times out of ten. But the axiom of independence implies that it is better to use the policy
p(v) = 1, p(b) = 0.
This is a straightforward application of the theorem, without any mucking about with possible worlds. Are you comfortable with giving Veronica the balloon every day? Or does valuing equity mean that expectation maximization is wrong? I think those are the only choices.
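The comparison can be sketched numerically. Assuming hypothetical per-balloon utilities (say Veronica values a balloon at 1.0 and Betty at 0.8; only the ordering matters), the expected per-day utility of a policy is linear in the probability, so the pure policy always beats the mixed one:

```python
# Hypothetical per-balloon utilities; only the ordering u_v > u_b matters.
u_v, u_b = 1.0, 0.8

def expected_utility(p_v):
    """Expected per-day utility when Veronica gets the balloon with probability p_v."""
    return p_v * u_v + (1 - p_v) * u_b

mixed = expected_utility(0.8)  # the p(v) = .8 policy -> 0.96
pure = expected_utility(1.0)   # the p(v) = 1 policy  -> 1.0
assert pure > mixed            # expectation maximization prefers the pure policy
```

Because the expectation is linear, any p(v) short of 1 leaves expected utility on the table, which is exactly why the mixed policy is ruled out.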
This will compel me, on day one, to compare the different ways I can organize the world, and to adopt the one whose future has Veronica getting more balloons, but not excessively more (since giving them all to Veronica has a utility of 0). Note: the ‘world’ determines its future. As a consequence, I’d allocate a balloon counter, write up a balloon schedule, or the like.
I don’t see how that is at odds with expected utility maximization. If it were at odds, I’d expect you to be able to come up with a Dutch Book-style scenario demonstrating some inconsistency in my choices (and I would expect to be able to come up with such a scenario myself).
It’s compatible with utility maximization (you have a utility function and you’re maximizing it), but it’s not compatible with world utility maximization, which is required for utilitarianism.

That utility function takes the world as an input; I’m not sure what you mean by “world utility maximization”.

The maximization of the sum (or average) of the utilities of all beings in the world.
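The scheduling move above can be sketched with a toy model (hypothetical numbers; the commenter’s actual formula is not shown in this thread): fix a utility function over final allocations that has diminishing returns per person, so lopsided allocations score poorly, and maximize it. The optimum gives Veronica more balloons, but not all of them:

```python
import math

def world_utility(v, b):
    # Hypothetical world utility with diminishing returns per person;
    # heavily lopsided allocations score poorly. An illustration only,
    # not the commenter's actual (unstated) formula.
    return 1.0 * math.sqrt(v) + 0.8 * math.sqrt(b)

days = 10
# Pick the split of 10 balloons that maximizes world utility.
best_v = max(range(days + 1), key=lambda v: world_utility(v, days - v))
print(best_v, days - best_v)  # -> 6 4: Veronica gets more, but not everything
```

Maximizing a fixed function like this is ordinary utility maximization, and it naturally produces a schedule rather than an everything-to-Veronica policy.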
I believe this line of the grandparent discusses what you’re discussing:
If we considered utility and disutility due to the perception of equity and inequity, then average utilitarianism would also produce somewhat equitable results.
Betty and Veronica don’t need to know of one another. The formula I gave produces rather silly results, but the point is that you can consistently define the utility of a world state in such a way that it intrinsically values equality.
Betty and Veronica don’t need to know of one another.
Right, then blacktrance’s complaint holds that you’re not just adding up the utilities of all the agents in the world, which is a condition of utilitarianism.
PhilGoetz was trying to show that to be correct or necessary from first principles, not merely by asserting it. Had his point been “average utilitarianism must be correct because summation is a condition of utilitarianism”, I wouldn’t have bothered replying (and he wouldn’t have bothered writing a long post).
Besides, the universe is not made of “agents”; an “agent” is just a loosely fitting abstraction that falls apart if you try to zoom in on the details. And summing utilities across agents is entirely nonsensical, for the reason that utility is only defined up to a positive affine transformation.
edit: also, hedonistic utilitarianism, at least as originally conceived, sums pleasure rather than utility. The two are distinct, in that pleasure may be numerically quantifiable—we may one day have a function that looks at some high-resolution 3d image and tells us how much pleasure the mechanism depicted in that image is feeling (a real number that can be compared across distinct structures).
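The affine-transformation point can be made concrete (hypothetical numbers): rescaling one agent’s utility function by a positive factor represents exactly the same preferences, yet it flips which world the summed total prefers, so the sum is not well-defined:

```python
# Two candidate worlds; each agent's utility for each world.
# A prefers w1, B prefers w2. Any positive affine transform of A's
# utility represents exactly the same preferences for A.
a = {"w1": 1.0, "w2": 0.0}
b = {"w1": 0.0, "w2": 0.6}

def summed(a_scale):
    # Sum across agents after rescaling A's utility by a positive factor.
    return {w: a_scale * a[w] + b[w] for w in a}

t1 = summed(1.0)  # the total prefers w1 (1.0 vs 0.6)
t2 = summed(0.5)  # same preferences for A, but the total now prefers w2
assert t1["w1"] > t1["w2"]
assert t2["w1"] < t2["w2"]
```

Nothing about either agent’s preferences changed between the two sums; only an arbitrary choice of scale did, which is the sense in which cross-agent summation is undefined.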
Imagine that instead of balloons you’re giving food. Veronica has no food source, and a day’s worth of food has a high utility to her—she’d go hungry without it. Betty has a food source, but the food is a little bland, and she would still gain some small amount of utility from being given food. Today you have one person-day worth of food and decide that Veronica needs it more, so you give it to Veronica. Repeat ad nauseam; every day you give Veronica food but give Betty nothing.
This scenario is basically the same as yours, but with food instead of balloons—yet in this scenario most people would be perfectly happy with the idea that only Veronica gets anything.
Alternatively, Veronica and Betty both have secure food sources. Veronica’s is slightly more bland relative to her preferences than Betty’s. A simple analysis yields the same result: you give the rations to Veronica every day.
Of course, if you compare across the people’s entire lives, you would find yourself switching between the two, favoring Veronica slightly. And if Veronica would have no food without your charity, you might have her go hungry on rare occasions in order to improve Betty’s food for a day.
This addresses whether you should evaluate the delta utility of an action versus people’s final total utility. It doesn’t address whether, when deciding what to do with a population, you should use average utility per person or total utility of the population in your cost function. That second problem only crops up when deciding whether to add or remove people from a population: average utilitarianism in that sense recommends killing people who are happy with their lives but not as happy as average, while total utilitarianism recommends increasing the population to the point of destitution and near-starvation, as long as it can be done efficiently enough.
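Both halves of that second problem can be shown with hypothetical welfare numbers: dropping a happy-but-below-average person raises the average, while adding many barely-positive lives raises the total:

```python
# Hypothetical welfare levels; positive values mean lives worth living.
population = [10, 9, 3]  # the person at 3 is happy, just below the average

avg_before = sum(population) / len(population)  # ~7.33
avg_without_third = sum(population[:2]) / 2     # 9.5
assert avg_without_third > avg_before  # averaging rewards removing them

total_before = sum(population)            # 22
crowded = population + [0.1] * 1000       # add many barely-positive lives
assert sum(crowded) > total_before        # totaling rewards the crowding
```

The second assertion is a toy version of the repugnant conclusion: the crowded world wins on the total despite every added life being barely worth living.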
The point is that the “most people wouldn’t like this” test fails.
It’s just not true that always giving to one person and never giving to another is a situation that most people would, as a rule, object to. Most people would sometimes object and sometimes not, depending on circumstances—they’d object when you’re giving toys such as balloons, but not when you’re giving necessities such as food to the hungry.
Pointing out an additional situation when most people would object (giving food when the food is not a necessity) doesn’t change this.