This will compel me, on day one, to compare different ways I could organize the world, and adopt the one whose future has Veronica getting more balloons, but not excessively more (since giving everything to Veronica has utility 0). Note: the ‘world’ defines its own future. As a consequence, I’d allocate a balloon counter, write up a balloon schedule, or the like.
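A minimal sketch of that day-one comparison, assuming a purely hypothetical utility function (my stand-in, not anything stated in the thread) that favors Veronica but scores zero when either girl gets all the balloons:

```python
# Toy sketch: enumerate ways to split the balloons and pick the
# allocation with the highest utility. The utility function below is
# hypothetical; it is chosen only so that Veronica is favored while
# either all-or-nothing split has utility 0.

TOTAL_BALLOONS = 10

def world_utility(betty: int, veronica: int) -> int:
    # Hypothetical: weights Veronica more heavily, 0 at the extremes.
    return betty * veronica**2

allocations = [(b, TOTAL_BALLOONS - b) for b in range(TOTAL_BALLOONS + 1)]
best = max(allocations, key=lambda bv: world_utility(*bv))
print(best)  # (3, 7): Veronica gets more balloons, but not all of them
```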
I don’t see how that is at odds with expected utility maximization. If it were, I’d expect you to be able to come up with a Dutch Book style scenario demonstrating some inconsistency between my choices (and I’d expect to be able to come up with such a scenario myself).
It’s compatible with utility maximization (you have a utility function and you’re maximizing it), but it’s not compatible with world utility maximization, which is required for utilitarianism.

That utility function takes the world as input; I’m not sure what you mean by “world utility maximization”.

The maximization of the sum (or average) of the utilities of all beings in the world.
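In symbols (standard textbook definitions, not quoted from the thread), total and average utilitarianism rank a world w containing beings 1..n by:

```latex
U_{\text{total}}(w) = \sum_{i=1}^{n} u_i(w),
\qquad
U_{\text{avg}}(w) = \frac{1}{n} \sum_{i=1}^{n} u_i(w)
```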
I believe this line of the grandparent addresses the point you’re making:
If we considered utility and disutility due to the perception of equity and inequity, then average utilitarianism would also produce somewhat equitable results.
Betty and Veronica don’t need to know of one another. The formula I gave produces rather silly results, but the point is that you can consistently define the utility of a world state in such a way that it intrinsically values equality.
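The formula itself is elided here, so one hedged illustration (mine, not the one from the thread): a maximin world utility intrinsically values equality, since raising the worse-off party is the only way to raise U:

```latex
U(w) = \min\bigl(u_{\text{Betty}}(w),\; u_{\text{Veronica}}(w)\bigr)
```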
Betty and Veronica don’t need to know of one another.
Right; then blacktrance’s complaint holds: you’re not just adding up the utilities of all the agents in the world, which is a condition of utilitarianism.
PhilGoetz was trying to show that to be correct or necessary from first principles, not merely by asserting it. Had his point been “average utilitarianism must be correct because summation is a condition of utilitarianism”, I wouldn’t have bothered replying (and he wouldn’t have bothered writing a long post).
Besides, the universe is not made of “agents”; an “agent” is just a loosely fitting abstraction that falls apart if you try to zoom in on the details. And summing utility across agents is entirely nonsensical, because utility is only defined up to a positive affine transformation.
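A concrete toy illustration of that last point (my numbers, not from the thread): rescaling one agent’s utility by a positive constant leaves that agent’s preferences unchanged, yet flips which outcome the cross-agent sum picks:

```python
# Utilities are only defined up to a positive affine transformation
# u' = a*u + c (a > 0): such a rescaling preserves an agent's own
# preference ordering, but it can reverse the ranking produced by
# summing utilities across agents.

outcomes = ["X", "Y"]
u_alice = {"X": 1.0, "Y": 3.0}  # Alice prefers Y
u_bob = {"X": 2.0, "Y": 1.0}    # Bob prefers X

def best_by_sum(u1, u2):
    return max(outcomes, key=lambda o: u1[o] + u2[o])

print(best_by_sum(u_alice, u_bob))  # Y  (sums: X=3.0, Y=4.0)

# Rescale Bob by a=10: Bob's preferences are unchanged,
# but the "social" choice flips.
u_bob_rescaled = {o: 10 * u for o, u in u_bob.items()}
print(best_by_sum(u_alice, u_bob_rescaled))  # X  (sums: X=21.0, Y=13.0)
```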
edit: also, hedonistic utilitarianism, at least as originally conceived, sums pleasure rather than utility. The two are distinct, in that pleasure may be numerically quantifiable: we may one day have a function that looks at some high-resolution 3D image and tells how much pleasure the mechanism depicted in that image is feeling (a real number that can be compared across distinct structures).