My intuitive proposal for M_i’s negentropy is to give it a fraction of the total negentropy equal to M_i’s fraction of the total mass, and I see this is equal to the Shapley value. Does this continue to be the case for other negentropy(mass) functions?
(Regardless, Shapley’s algorithm seems more fair than mine.)
That’s a good question, and the answer turns out to be no. For example, suppose negentropy(mass) = mass^3 instead of mass^2. Then in the ordering {Alice, Bob}, Alice’s marginal contribution (MC) is A^3 and Bob’s is 3 A^2 B + 3 A B^2 + B^3; in the reverse ordering, Alice’s MC is (A+B)^3 - B^3 = A^3 + 3 A^2 B + 3 A B^2. Averaging over the two orderings, Alice’s MC is A^3 + (3/2) A^2 B + (3/2) A B^2.
In the proportional approach you suggest, Alice’s allocation would be (A+B)^3 * A/(A+B) = A (A+B)^2 = A^3 + 2 A^2 B + A B^2.
The difference is troubling, and I’ll have to think about what is going on.
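(To double-check this sort of arithmetic, here is a minimal brute-force sketch in Python; the function names are my own, nothing standard. It averages marginal contributions over all orderings and compares the result with proportional allocation.)

    from itertools import permutations

    def shapley(contributions, u):
        """Average each player's marginal contribution over all orderings."""
        names = list(contributions)
        shares = dict.fromkeys(names, 0.0)
        orders = list(permutations(names))
        for order in orders:
            running = 0.0
            for name in order:
                shares[name] += u(running + contributions[name]) - u(running)
                running += contributions[name]
        return {n: s / len(orders) for n, s in shares.items()}

    def proportional(contributions, u):
        """Split u(total) in proportion to each player's contribution."""
        total = sum(contributions.values())
        return {n: u(total) * x / total for n, x in contributions.items()}

    A, B = 2.0, 3.0
    cubic = lambda m: m ** 3
    print(shapley({'Alice': A, 'Bob': B}, cubic))       # Alice: 53.0 = A^3 + (3/2) A^2 B + (3/2) A B^2
    print(proportional({'Alice': A, 'Bob': B}, cubic))  # Alice: 50.0 = A^3 + 2 A^2 B + A B^2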
First, by phrasing the question as negentropy(mass) you have assumed we’re talking about agents contributing some quantities of a fungible good, and a transform on the total of the good which yields utility. Proportional allocation doesn’t make any sense at all (AFAICT) if the contributions aren’t of a fungible good.
But let’s take those assumptions and run with them. Alice and Bob will contribute A and B of a fungible good. The total contribution will be A+B, which will yield u(A+B) utility. How much of the utility should be credited to Alice, and how much to Bob?
Shapley says Alice’s share of the credit should be u(A)/2 + u(A+B)/2 - u(B)/2. Proportional allocation says instead u(A+B) * A/(A+B). When precisely are these equal? A bit of algebra:
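(Spelling out the steps: set the two shares equal, double both sides, and isolate u(A) - u(B).)

    u(A)/2 + u(A+B)/2 - u(B)/2 = u(A+B) * A/(A+B)
    u(A) + u(A+B) - u(B) = 2A * u(A+B)/(A+B)
    u(A) - u(B) = u(A+B) * [2A/(A+B) - 1] = u(A+B) * (A - B)/(A + B)

Dividing through by u(A+B) gives: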
[u(A) - u(B)] / u(A+B) = (A - B) / (A + B)
Well now that’s a lot of structure! I’m having trouble phrasing it as a simple heuristic that resonates intuitively with me… I don’t feel like I understand as a matter of some more basic principle why proportional allocation only makes sense when this holds. Can anyone help me out by translating the functional equation into some pithy English?
Wait, maybe this will help: Alice, Bob, and Eve.
(2/6) u(A) + (1/6) (u(A+B) - u(B)) + (1/6) (u(A+E) - u(E)) + (2/6) (u(A+B+E) - u(B+E)) = u(A+B+E) * A / (A+B+E)
After more algebra:
[(2u(A) - 2u(B+E)) + (u(A+B) - u(E)) + (u(A+E) - u(B))] / u(A+B+E) = [(2A - (B+E)) + (A - E) + (A - B)] / (A + B + E)
Is this the best structure for understanding it? I’m not sure, ’cause I still don’t intuit what’s going on, but it seems pretty promising to me.
(Edited to rearrange equations and add:) If we want proportional allocation to work, then comparing the difference in contributions between any two agents to the magnitude of the total contribution should be the same as comparing the difference in utility derivable from the agents alone to the magnitude of the total utility derivable from the agents together.
Sounds pretty but I’m not sure I intuit why it rather than another principle should hold here.
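(A quick numeric sanity check of the three-player identity above, in Python; the particular numbers are arbitrary.)

    def lhs(u, A, B, E):
        return (2*u(A) + (u(A+B) - u(B)) + (u(A+E) - u(E))
                + 2*(u(A+B+E) - u(B+E))) / 6

    def rhs(u, A, B, E):
        return u(A+B+E) * A / (A+B+E)

    A, B, E = 2.0, 3.0, 5.0
    print(lhs(lambda m: m**2, A, B, E), rhs(lambda m: m**2, A, B, E))  # 20.0 20.0: equal
    print(lhs(lambda m: m**3, A, B, E), rhs(lambda m: m**3, A, B, E))  # 218.0 200.0: not equal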
Differentiating your first equation with respect to B at B=0, after rearranging the terms a bit, we get a differential equation:
u'(x) = 2u(x)/x - u'(0)
Solving the equation using the first method from this page yields
u(x) = C_1 x + C_2 x^2
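(Checking by substitution, with a small sympy sketch; note that u'(0) = C_1 for this family.)

    import sympy as sp

    x, C1, C2 = sp.symbols('x C1 C2')
    u = C1*x + C2*x**2
    residual = sp.diff(u, x) - (2*u/x - C1)  # the ODE above, with u'(0) = C1
    print(sp.simplify(residual))             # 0: the family solves the equation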
So u must be of the form C_1 x + C_2 x^2: that is the necessary and sufficient condition for the Shapley value to coincide with proportional allocation for two participants. The result also holds for three or more participants, because both the Shapley value and proportional allocation are linear with respect to the utility function, and the two coincide for u(x) = x and u(x) = x^2 with any number of players. So yeah, Wei Dai just got lucky with that example.
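(To see the linearity argument in action, here is a sympy sketch checking the three-player case symbolically for an arbitrary C_1 x + C_2 x^2.)

    import sympy as sp
    from itertools import permutations

    A, B, E, C1, C2 = sp.symbols('A B E C1 C2', positive=True)
    u = lambda m: C1*m + C2*m**2
    players = {'Alice': A, 'Bob': B, 'Eve': E}

    # Alice's Shapley value: average marginal contribution over all 6 orderings.
    total_mc = 0
    for order in permutations(players):
        running = 0
        for name in order:
            if name == 'Alice':
                total_mc += u(running + players[name]) - u(running)
            running += players[name]
    shapley_alice = total_mc / 6
    proportional_alice = u(A + B + E) * A / (A + B + E)
    print(sp.simplify(shapley_alice - proportional_alice))  # 0 for all C1, C2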
(I’d nearly forgotten how to do this stuff. Thanks for the exercise!)
Good work. :)
We’re left with the question: for a non-linear and non-quadratic utility function, what is fairer, proportional allocation, or Shapley Value? Among the fairness properties satisfied by Shapley Value, it appears that proportional allocation doesn’t satisfy “Equal Impact”, defined as follows (from page 161 of Moulin’s book):

    The impact of removing agent j on agent i’s share is the same as that of removing agent i on agent j’s share.
With the cubic example, Bob’s impact on Alice under Shapley Value is (3/2) A^2 B + (3/2) A B^2, and Alice’s impact on Bob is the same. Under proportional allocation, Bob’s impact on Alice is 2 A^2 B + A B^2 while Alice’s impact on Bob is A^2 B + 2 A B^2, so the impacts are unequal. So, is Equal Impact essential to fairness? I’m not sure...
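(Working both directions out for the cubic example, with a sympy check; “impact” here is an agent’s share with the other present minus their share alone.)

    import sympy as sp

    A, B = sp.symbols('A B', positive=True)
    u = lambda m: m**3

    shap_A = (u(A) + u(A+B) - u(B)) / 2   # Alice's Shapley share with Bob present
    shap_B = (u(B) + u(A+B) - u(A)) / 2
    prop_A = u(A+B) * A / (A+B)           # Alice's proportional share with Bob present
    prop_B = u(A+B) * B / (A+B)

    print(sp.expand(shap_A - u(A)))  # 3*A**2*B/2 + 3*A*B**2/2  (Bob's impact on Alice)
    print(sp.expand(shap_B - u(B)))  # 3*A**2*B/2 + 3*A*B**2/2  (equal: Equal Impact holds)
    print(sp.expand(prop_A - u(A)))  # 2*A**2*B + A*B**2
    print(sp.expand(prop_B - u(B)))  # A**2*B + 2*A*B**2  (not equal: Equal Impact fails)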
I’m gonna ramble a bit to clear up my own thoughts, apologies if this sounds obvious...
The Shapley value is the only value operator defined on all coalitional games that satisfies some intuitive axioms. (They’re all very natural and don’t include “Equal Impact”.) Proportional allocation isn’t defined on all coalitional games, only a subset of them where you arbitrarily choose some numbers as players’ “contributions”. (A general coalitional game doesn’t come with that structure; it’s just a set of 2^N numbers that specifies the payoff for each possible cooperating clique.) After this arbitrary step, proportional allocation does seem to satisfy the same natural axioms that the Shapley value does. But you can’t extend it to all coalitional games coherently, because otherwise the intuitive axioms would force your “contribution” values to become Shapley values.
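(A toy illustration of that arbitrariness, with made-up numbers: the same bare two-player game admits different “contribution” labelings, each consistent with some u, and proportional allocation disagrees with itself across labelings, while the Shapley value is pinned down by the payoffs alone.)

    # A bare coalitional game: a payoff for every coalition.
    v = {(): 0, ('Alice',): 1, ('Bob',): 2, ('Alice', 'Bob'): 10}

    # Two contribution labelings, each realizable by some u:
    # (1, 2) with u(1)=1, u(2)=2, u(3)=10; (2, 3) with u(2)=1, u(3)=2, u(5)=10.
    for a, b in [(1, 2), (2, 3)]:
        total = a + b
        print(v[('Alice', 'Bob')] * a / total, v[('Alice', 'Bob')] * b / total)
    # 3.33 6.67, then 4.0 6.0: proportional allocation depends on the labeling.
    # The Shapley value uses only v: Alice gets (1 + 10 - 2)/2 = 4.5, Bob gets 5.5.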
In general, I see no reason to use proportional allocation over the Shapley value. If each player suffers a loss of utility proportional to their individual contribution, or any other side effect with an arbitrary cost function, just include it in the SV calculation.
Ok, I think I can articulate a reason for my doubt about the Shapley Value. One nice property for a fair division method to have is that the players can’t game the system by transferring their underlying contributions to one another. That is, Alice and Eve shouldn’t be able to increase their total negentropy allocation (at Bob’s expense) by transferring matter from one to the other ahead of time. Proportional allocation satisfies this property, but Shapley Value doesn’t (unless it happens to coincide with proportional allocation).
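(A numeric illustration with the cubic example, using the same brute-force Shapley sketch as before: by equalizing their holdings beforehand, Alice and Eve increase their combined Shapley share at Bob’s expense, while their combined proportional share is unchanged.)

    from itertools import permutations

    def shapley(contributions, u):
        names = list(contributions)
        shares = dict.fromkeys(names, 0.0)
        orders = list(permutations(names))
        for order in orders:
            running = 0.0
            for name in order:
                shares[name] += u(running + contributions[name]) - u(running)
                running += contributions[name]
        return {n: s / len(orders) for n, s in shares.items()}

    u = lambda m: m**3
    lopsided = shapley({'Alice': 2.0, 'Eve': 0.0, 'Bob': 1.0}, u)
    balanced = shapley({'Alice': 1.0, 'Eve': 1.0, 'Bob': 1.0}, u)
    print(lopsided['Alice'] + lopsided['Eve'])  # 17.0
    print(balanced['Alice'] + balanced['Eve'])  # 18.0: the transfer gained the pair a unit
    # Proportional allocation gives the pair (2/3) * u(3) = 18.0 either way.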
If such transfer of resources is allowed, your share of negentropy must depend only on your contribution, the total contribution and the number of players. If we further assume that zero contribution implies zero share, it’s straightforward to prove (by division in half, etc.) that proportional allocation is the only possible scheme.
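(Here is one way the “division in half” argument can go, holding the total T and the number of players fixed; this is my reconstruction, assuming shares vary continuously.)

    Write g(x) for the share of a player who contributed x.
    Transfer-invariance: g(x) + g(y) can depend only on x + y, say g(x) + g(y) = h(x + y).
    Zero contribution, zero share: g(0) = 0, so h(x) = g(x) + g(0) = g(x).
    Hence g(x + y) = g(x) + g(y); with continuity this forces g(x) = k x.
    The shares exhaust u(T), so k T = u(T), i.e. g(x) = x u(T) / T: proportional allocation.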
This still isn’t very satisfying. John Nash would have advised us to model the situation with free transfer as a game within some larger class of games and apply some general concept like the Shapley value to make the answer pop out. But I’m not yet sure how to do that.
One difficulty with proportional allocation is deciding how to measure contributions. Do you divide proportionately to mass contributed, or do you divide proportionately to negentropy contributed?
In Shapley value, a coalition of Alice and Eve against Bob is given equal weight with the other two possible two-against-one coalitions. Yes, Shapley does permit ‘gaming’ in ways that proportional allocation does not, but it treats all possible ‘gamings’ (or coalition structures) equally.
According to Moulin, there are several different sets of axioms that can be used to uniquely derive the Shapley Value, and Equal Impact is among them (it can be used to derive Shapley Value by itself, if I understand correctly).
The problem with all of those sets of axioms is that each set seems to include at least one axiom that isn’t completely intuitive. For example, using the terminology in the Wikipedia article, we can use Symmetry, Additivity and Null Player, and while Symmetry and Null Player seem perfectly reasonable, I’m not so sure about Additivity.
Neat. So if utility is linear, quadratic, or any mix of the two, the Shapley value coincides with proportional allocation.