Being fair is not, in general, a VNM-rational thing to do.
Suppose you have an indivisible slice of pie, and six people who want to eat it. The fair outcome would be to roll a die to determine who gets the pie. But this is a probabilistic mixture of six deterministic outcomes which are equally bad from a fairness point of view.
Strictly preferring a lottery to each of its outcomes is not VNM-rational (I'm pretty sure it violates independence, but in any case it's not maximizing expected utility).
We can make this stronger by supposing some people like pie more than others (but all of them still like pie). Now the lottery is strictly worse than giving the pie to the one who likes pie the most.
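To make the expected-utility step explicit (a small sketch; the notation here is mine): for any VNM utility function $u$ over the six outcomes $o_1, \dots, o_6$, the fair lottery $L$ that picks each person with probability $1/6$ satisfies

$$EU(L) = \frac{1}{6}\sum_{i=1}^{6} u(o_i) \le \max_i u(o_i),$$

so an expected-utility maximizer can never strictly prefer $L$ to every one of its outcomes, and once the pie-lovers' utilities are unequal the lottery falls strictly below the best deterministic outcome.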
Although the result is still interesting, I think most preference aggregators violate Axiom 1, rather than Axiom 2, and this is not inherently horrible.
I’m pretty sure it’s possible to reach the same conclusion by removing the requirement that the aggregation be VNM-rational and strengthening Axiom 2 to say that the aggregation must be Pareto-optimal with respect to every prior probability distribution over the choices the aggregation might face. That is, “given any prior probability distribution over pairs of gambles the aggregation might have to choose between, there is no other possible aggregation that would be better in expectation for every agent in the population.” It may even be possible to reach the same conclusion using just one such prior distribution with certain properties, rather than all of them.
I don’t understand what your strengthened axiom means. Could you give an example of how, say, the take-the-min-of-all-expected-utilities aggregation fails to satisfy it?
(Or if it doesn’t I suppose it would be a counterexample, but I’m not insisting on that)
Let’s say there are 3 possible outcomes: A, B, and C, and 2 agents: x and y. The utility functions are x(A)=0, x(B)=1, x(C)=4, y(A)=4, y(B)=1, y(C)=0.
One possible prior probability distribution over pairs of gambles is: a 50% chance that the aggregation will be asked to choose between A and B, and a 50% chance that it will be asked to choose between B and C (in this simplified case, all the anticipated “gambles” are actually certain outcomes). Your maximin aggregation would choose B in each case, so both agents anticipate an expected utility of 1. But the aggregation that maximizes the sum of the utility functions would choose A in the first case and C in the second, and each agent would anticipate an expected utility of 2. Since both agents would agree that this aggregation is better than maximin, maximin is not Pareto-optimal with respect to that probability distribution.
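Since this all reduces to arithmetic, here is a minimal Python sketch (the function names and data layout are mine, just for illustration) that reproduces the numbers above:

```python
# Under the 50/50 prior over {choose between A and B, choose between B and C},
# the maximin aggregation gives each agent expected utility 1, while the
# sum-of-utilities aggregation gives each agent expected utility 2.

utilities = {
    "x": {"A": 0, "B": 1, "C": 4},
    "y": {"A": 4, "B": 1, "C": 0},
}

def maximin_choice(options):
    # Pick the option whose worst-off agent is best off.
    return max(options, key=lambda o: min(u[o] for u in utilities.values()))

def sum_choice(options):
    # Pick the option with the highest total utility.
    return max(options, key=lambda o: sum(u[o] for u in utilities.values()))

# The assumed prior over pairs the aggregation might have to choose between.
prior = [(0.5, ("A", "B")), (0.5, ("B", "C"))]

for rule in (maximin_choice, sum_choice):
    expected = {
        agent: sum(p * utilities[agent][rule(options)] for p, options in prior)
        for agent in utilities
    }
    print(rule.__name__, expected)

# prints:
# maximin_choice {'x': 1.0, 'y': 1.0}
# sum_choice {'x': 2.0, 'y': 2.0}
```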
Upvoted for suggesting a good example. I had suspected my explanation might be confusing, and I should have thought to include an example.
Thank you, I understand it now.