Cooperative Surplus Splitting
Often we cooperate to extract surplus value from the government, hotels, the physics that makes operating cars cost money, or other sources—value that we could not extract individually. When I notice such a surplus I often wonder how the surplus should be split. What is fair? Purely cooperatively, without anyone trying to game the surplus-allocation-function, and assuming the stated coalitions are fixed rather than negotiable, how much of the surplus should be attributed to each contributing party?
Some concrete examples that have come up recently in real life*:
1. Matching donations. The company I work for will match donations to charity, dollar for dollar, up to a certain maximum. Viscerally, how should I feel about donating $100 to puppies**? More than $100, since puppies get $200, certainly. But less than $200, since my employer should feel puppy-love too, and presumably there’s a conservation of visceral feeling law that should apply here. Further suppose that my employer’s matching offer caused me to donate $100 instead of, say, $50. What math should be done and why?
2. Exemption splitting. An amicable divorce leaves two parents wondering who should claim their student daughter as a dependent. As a purely “what is fair?” financial question, how much of the tax savings from that exemption should be distributed to the father, mother, and daughter? Suppose the father’s marginal tax rate is 25% and overall tax rate is 18%, and the mother’s marginal rate is 15% and overall is 12%. What math should be done and why?
3. Refinancing. My friend has a debt at 12% and, for silly reasons, will obviously be able to pay it off eventually but cannot this year. I can pay it off, though, and so could several other people***. Assume there are 3 people including me who could pay it off, and that our current expected returns on invested money are (say) 2%, 3.5%, and 6%; for simplicity, she will repay the loan plus any surplus owed in one year. Who should pay off how much of the loan (say it's $5000)? I assume the 2% person should pay all of it. That's a 10-percentage-point surplus: how much does each of the four of us get? What math should be done and why? (The sketch after this list works out the raw numbers for all three examples.)
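For concreteness, here is a minimal sketch of the raw numbers in all three examples, before any fairness math. The $4,000 exemption amount is my assumption; the other figures come from the examples above.

```python
# A sketch of the raw surplus in each example, before any fairness math.
# Values marked "assumed" are mine, not from the post.

# 1. Matching donations: I donate $100, my employer matches $100, and without
#    the match I'd have donated $50 (from the post).
donation, match, counterfactual = 100, 100, 50
print("1. extra dollars to puppies:", donation + match - counterfactual)  # 150

# 2. Exemption splitting: the gain from coordinating is the gap between the
#    parents' marginal rates times the exemption amount (assumed: $4,000).
father_rate, mother_rate, exemption = 0.25, 0.15, 4000
print("2. surplus from the father claiming it:",
      (father_rate - mother_rate) * exemption)  # 400.0

# 3. Refinancing: the 2% lender retires a 12% debt of $5,000 for one year.
principal, debt_rate, lender_rate = 5000, 0.12, 0.02
print("3. one-year surplus:", (debt_rate - lender_rate) * principal)  # 500.0
```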
As in The Bedrock of Fairness, are there qualities of the solutions we have strong opinions on, even if we do not know the procedure which would generate solutions with those qualities?
*Details changed.
**I do not donate to puppies.
***Assume default risk is negligible.
… presumably there’s a conservation of visceral feeling law that should apply here …
That’s why I sometimes randomly feel happy: because somewhere else in the universe, two people are fighting.
Why do you expect there to be standard math for this? This seems like it’s up to your utility function and the psychology of motivation.
Anyway, Stuart Armstrong summarized a lot of what’s known about bargaining problems. I think the Nash bargaining solution is better than the other ideas he describes, for reasons that are complicated to explain.
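For reference, with two players, feasible utility set $F$, and disagreement point $(d_1, d_2)$, the Nash solution is the feasible point maximizing the product of the players' gains over the disagreement point:

```latex
u^{*} = \operatorname*{arg\,max}_{(u_1, u_2) \in F,\; u_i \ge d_i} \; (u_1 - d_1)(u_2 - d_2)
```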
As far as I know, there’s no general solution for more than two players yet. We do know that any Pareto optimum must correspond to maximizing a weighted sum of the agents’ utility functions, but that isn’t much help; the whole point of bargaining is to choose which Pareto optimum will be selected, and knowing that there is some weighting that would give the right answer doesn’t tell us which one it is. If you look at the proof of the fact that any Pareto optimum must correspond to a weighted sum of the utility functions, you can see that the solution is, in some sense, more fundamental than the weights, and that trying to reduce the problem of picking a solution to one of picking weights is not a promising angle of attack here.
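The fact referenced here can be stated as follows, assuming the feasible utility set $F \subseteq \mathbb{R}^n$ is convex (the convexity assumption is mine; it is the standard condition under which the supporting-hyperplane argument goes through): if $u^{*} \in F$ is Pareto optimal, then

```latex
\exists\, w_1, \dots, w_n \ge 0,\ \sum_i w_i > 0, \quad \text{such that} \quad
u^{*} \in \operatorname*{arg\,max}_{u \in F} \; \sum_i w_i\, u_i .
```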
I will have to think about the “tell me what I mean to you” approach. Reducing problems like this to one or two simple free parameters with no canonical fair value is useful, too, because it crystallizes the only real decision to be made out of the morass of money questions.
Well, here’s an example that makes some sense to me. If I find another, less complicated situation which “should” provoke the same feelings as this situation, that’s progress, and I can! Assume my utility function is approximately linear, at this scale, in dollars I lose, dollars my employer loses, and dollars puppies gain. Then maybe I can attribute to myself: my loss of the first $50, half of my loss of the next $50, half of my employer’s loss of $100, puppies’ gain of the first $50, and half of puppies’ gain of the remaining $150. That is, I should feel about the overall situation roughly how I’d feel if I solely caused a situation where:
I lost $75
My employer lost $50
Puppies gained $125
That’s an example of math that makes a lot of sense to me, regardless of my utility function except for the fairly reasonable linearity-at-small-scales assumption.
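In code, the attribution rule described above (amounts I'd have caused alone count fully toward me; jointly caused amounts are split in half) gives exactly those three numbers:

```python
# The attribution rule above, in code: amounts I'd have caused alone count
# fully toward me; amounts caused jointly with my employer are split in half.
# All dollar figures come from the comment.

solo_donation = 50       # what I'd have donated without the matching offer
extra_donation = 50      # the additional donation the match induced from me
employer_match = 100

my_loss = solo_donation + extra_donation / 2                        # 75.0
employer_loss = employer_match / 2                                  # 50.0
puppy_gain = solo_donation + (extra_donation + employer_match) / 2  # 125.0

print(my_loss, employer_loss, puppy_gain)  # 75.0 50.0 125.0
```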
Okay, so “how should I feel” means “what is the utility of this scenario” in this context? In that case, you should use the full values, rather than discounting for things that were partially ‘someone else’s responsibility’. If you prefer world-state A to world-state B, you should act so as to cause A to obtain rather than B. The fact that A gets a tag saying “this wasn’t all me—someone helped with this” doesn’t make the difference in utilities any smaller.
Maybe so! It feels very wrong that both I and my employer should each feel like we caused me to lose $100, my employer to lose $100, and puppies to gain $200. I mean, suppose there was a third agent who could vote yea or nay on the transaction; 10 coins are flipped, and if they all land heads, her vote decides whether the transaction takes place. Should that agent, voting yea, also feel like she caused me to lose $100, my employer to lose $100, and puppies to gain $200? If yes, whoa; if no, why not, and does the same reason affect how I should feel?
Hmm, this makes it more clear to me what I mean by “how should I feel”. I think what I mean is something like “Suppose my brain was trying to build an intuition that gives me feelings about possible actions based on unseen predictions regarding how those actions affect the world, and lots of other brains were also trying to do this. How should these brains interpret that situation so as to not overtrain or undertrain?”
What I don’t want is a result that says “Enter into a contract with 99 other people that none of you donates $100 to puppies unless all of you do, then donate $100, so you can feel like you caused puppies to get $10000!”. Somehow that seems counterproductive. Except as a one-off game for funsies, which doesn’t count. :)
Actually, this isn’t wrong, as long as you think about it the right way. First, you are causing puppies to get $10000, but you are also causing 99 other people to lose $100 each, so you have to account for that.
More importantly, though, frame the scenario this way: 99 people have already signed the contract, and you still have to decide whether to sign it. Then you are clearly making the entire difference, and you should be willing to accept correspondingly large disutilities if they are necessary to get the deal to happen (unless they could easily find someone else, in which case both the math and the intuition agree that you are not creating as many utilons). Note that the math cannot require everyone to accept individually large disutilities, because then signing the contract would cause all of those disutilities to occur.
If, however, they have not signed anything yet, then either you know that they are going to sign and that they cannot find anyone else to be the 100th person, in which case this is equivalent to the other scenario, or you don’t know whether they are all going to sign, in which case the expected utility is reduced by the uncertainty and you should no longer accept as large disutilities to sign the contract.
If we made that assumption then you’d never stop giving to puppies: whatever you gained by giving $100 you’d gain twice over by giving $200. Assuming that both your employer and your favorite charity have a lot more money than you, it’s probably okay to assume that they experience changes in marginal utility which are locally linear in dollars, but at some point you’re going to stop giving because your own utility function went noticeably nonlinear, e.g. that second $100 would have been a bigger loss to you than the first was.
Right, that’s why I specified “at this scale”… Oh I see it’s not clear the modifier refers to all three resources. Editing. :)
1) There is no conservation of visceral feeling. Performing math to determine how you feel about something is just as bad as using your feelings to estimate probabilities.
2) None of the tax savings should be directly distributed to the dependent. The tax savings exist because the dependent student is financially dependent on her parents, and would be even if the tax savings did not exist; she is being subsidized already.
As a fairness issue, the exemption should be claimed by the person who will get the largest marginal benefit from it, and the savings then distributed proportionally to the expenses incurred in supporting the student. If the numbers work properly, this is done trivially: the total value of the exemption is applied first to the student’s expenses, and the remainder of those expenses is paid according to whatever is deemed fair by all participants (sketched below). If the participants cannot agree regarding what is fair, the student does not go to school as a dependent and the exemption does not exist.
3) Everyone should pool all of their investment money: $5000 of it earns 12% interest, and the remainder is invested by the person who expects 6% annual returns. A fair amount is deducted as payment for investment services, and the surplus is distributed proportionally to the amounts paid into the mutual fund (sketched below). If any of the participants do not agree on what a fair fee is, then those people do not participate.
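For concreteness, here is a sketch of both procedures with assumed numbers; the exemption amount, expense total, contribution sizes, and fee below are my assumptions, not from the thread.

```python
# 2) Exemption splitting: apply the tax savings to the student's expenses
#    first, then split the remainder as agreed (an even split is assumed here).
exemption_savings = 0.25 * 4000      # assumed: a $4,000 exemption at 25%
student_expenses = 10000             # assumed
remainder = student_expenses - exemption_savings
print("each parent pays:", remainder / 2)          # 4500.0

# 3) Refinancing: pool the money, retire the 12% debt, invest the rest at 6%,
#    deduct a fee, and distribute proportionally to contributions. Assume each
#    lender contributes $5,000 and the management fee is $100.
contributions = {"A (2%)": 5000, "B (3.5%)": 5000, "C (6%)": 5000}
pool = sum(contributions.values())                 # 15000
returns = 5000 * 0.12 + (pool - 5000) * 0.06       # 1200.0
fee = 100                                          # paid to C for managing
for name, amount in contributions.items():
    print(name, "gets", round((returns - fee) * amount / pool, 2))  # 366.67
```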
Thanks for actual answers! :)
Sorry that they don’t generalize well. The third one still confuses me: why don’t three people who can cooperate fairly at the same level already share investment advice and have identical returns on investment? Is the person who is getting lower returns more risk-averse than the other two? If so, why is a loan with little risk of default made to the fourth person at twice the high-risk yield, given that negligible default risk is a premise of the question?
I’ve never been completely clear on where the Shapley value does and does not apply.
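For concreteness, here's a minimal sketch of the computation itself, applied to example #1. The characteristic function below is one possible modeling of the game, not something from the thread; the answer depends heavily on that choice, which is part of the ambiguity being discussed.

```python
from itertools import permutations

def shapley(players, value):
    """Shapley value: average each player's marginal contribution over
    every order in which the coalition could have formed."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Example #1 as a two-player game between me and my employer. The "value" of
# a coalition is the extra dollars puppies receive beyond the $50 I would
# have donated anyway (a modeling choice, not the only one).
def v(coalition):
    if coalition == frozenset({"me", "employer"}):
        return 150  # $100 from me + $100 match, minus the $50 counterfactual
    return 0        # alone, neither of us moves money beyond the baseline

print(shapley(["me", "employer"], v))  # {'me': 75.0, 'employer': 75.0}
```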
The Shapley value is good, but part of it is unintuitive to me. I just don’t understand why, for example in #2, the coalition is divided into the father, mother, and daughter, rather than into “those supplying the exemption” and “those applying the exemption”. Or if someone is fairly stable over time and someone else is changing rapidly, does the first person get to capture way more value because over the day of negotiations they’re one person while the other is three? And this is all still handwaving away the process of choosing which coalitions to form, but that’s a whole new question.
“Those supplying the exemption” would be the people who write the tax laws: what is their fair cut? Are we deliberately ignoring the fact that the law defines exactly who is permitted to take the exemption, and assuming that everyone involved is amicably agreeing to falsify information (if need be) to maximize the tax benefits? E.g., if the student were covering all of her expenses, she would not qualify as a dependent and no one else could claim her as one.
Good point, given what I said I have no real reason to exclude the government from which they’re extracting “surplus”. But I wanna. ;)
(I believe falsifying information is not necessary given changes in who pays, plus the gift exclusion, but that’s beside the point.)
http://lesswrong.com/lw/12v/fair_division_of_blackhole_negentropy_an/ http://lesswrong.com/lw/13y/freaky_fairness/