First of all, this is awesome.
I didn’t know about KS bargaining before reading this, thinking through it now…
It seems kind of odd that terrible solutions like (1000, −10^100) could determine the outcome (I realize they can’t be the outcome, but still). I would hesitate to use KS bargaining unless I felt that Best_i(F) values were in some sense ‘reasonable’ outcomes. Do you have a general sense of what a life of maximizing your spouse’s utility would look like (and vice versa)?
Trying to imagine this myself wrt my own partner, figuring out my utility function is a little tricky. The issue is that I think I have some concern for fairness baked in. Like, do I want my partner to do 100% of chores? My reaction is to say ‘no, that would be unfair, I don’t want to be unfair’. But if you’re referencing your utility function in a bargaining procedure to decide what ‘fair’ is, I don’t think that works. So, would I want my partner to do 100% of chores if that were fair? I can simulate that by imagining she offered to do this temporarily as part of a trade or bet and asking myself if I’d consider that a better deal than, say, her doing 75% of chores. And yes, yes I would. But I’d consider ‘she does 100% of chores no matter what, I’m not allowed to help’ a worse deal than ‘she does 100% of chores unless it becomes too costly to her’ for some definitions of ‘too costly’.
Assuming that my utility function is like that about most things, and that hers is as well, I’d say our Best_i(F) values are actually reasonable counterfactuals to consider. Which inclines me to think yours are as well.
Still, ‘everything I do’ is a big solution space to make assumptions about. The Vow of Concord pretty much requires you to look for edge cases where your spouse’s utility can be increased by disproportionate sacrifices of yours; I’d suggest you start looking now (if you haven’t yet), before you’ve Vowed to let them guide your decisions.
Thank you :)
I think you might be misunderstanding how KS works. The “best” values in KS are those that result when you optimize one player’s payoff under the constraint that the second player’s payoff is higher than the disagreement payoff. So, you completely ignore outcomes where one of us would be worse off in expectation than if we didn’t marry.
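To make that concrete, here’s a quick sketch of the KS computation over a finite set of candidate outcomes (the numbers are my own toy example, not anything from the post). With the restriction applied, a (1000, −10^100)-style outcome is filtered out before the “best” values are even computed:

```python
# Toy Kalai-Smorodinsky computation over a finite set of outcomes,
# each a (player 1 utility, player 2 utility) pair. Hypothetical numbers.

def ks_solution(outcomes, disagreement):
    d1, d2 = disagreement
    # Only consider outcomes that weakly dominate the disagreement point,
    # so "terrible" outcomes can't influence anything.
    feasible = [(u1, u2) for (u1, u2) in outcomes if u1 >= d1 and u2 >= d2]
    # "Best" (ideal) values: each player's max over the restricted set.
    # (Assumes some feasible outcome strictly improves on disagreement
    # for each player, so the denominators below are nonzero.)
    b1 = max(u1 for u1, _ in feasible)
    b2 = max(u2 for _, u2 in feasible)
    # KS equalizes each player's fraction of their ideal gain; over a
    # finite set, pick the outcome maximizing the minimum normalized gain.
    def min_gain(p):
        u1, u2 = p
        return min((u1 - d1) / (b1 - d1), (u2 - d2) / (b2 - d2))
    return max(feasible, key=min_gain)

outcomes = [(0, 0), (10, 4), (6, 8), (1000, -1e100)]
print(ks_solution(outcomes, (0, 0)))  # (6, 8); the terrible outcome is ignored
```

One caveat: over a finite set the line from the disagreement point to the ideal point may not pass exactly through any candidate, so maximizing the minimum normalized gain is a stand-in for the exact KS point.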
I’m not sure this is the case? Wiki does say “It is assumed that the problem is nontrivial, i.e, the agreements in [the feasible set] are better for both parties than the disagreement”, but this is ambiguous as to whether they mean some or all. Googling further, I see graphs like this where non-Pareto-improvement solutions visibly do count.
I agree that your version seems more reasonable, but I think you lose monotonicity over the set of all policies, because a weak improvement to player 1’s payoffs could turn a (-1, 1000) point into a (0.1, 1000) point, make it able to affect the solution, and make the solution for player 1 worse. Though you’ll still have monotonicity over the restricted set of policies.
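A toy numeric example of that failure (numbers mine, purely illustrative): nudging player 1’s payoff in one otherwise-excluded outcome from −1 up to 0.1 lets it cross the disagreement point, blow up player 2’s ideal value, and drag player 1’s solution payoff from 8 down to 0.1:

```python
# Monotonicity failure sketch: a weak improvement for player 1 makes the
# KS solution worse for player 1, once the improved outcome crosses the
# disagreement threshold and starts moving the ideal point.

def ks(outcomes, d=(0.0, 0.0)):
    # Restrict to outcomes weakly dominating the disagreement point.
    feas = [p for p in outcomes if p[0] >= d[0] and p[1] >= d[1]]
    b1 = max(p[0] for p in feas)  # player 1's ideal value
    b2 = max(p[1] for p in feas)  # player 2's ideal value
    # Pick the outcome maximizing the minimum normalized gain.
    gain = lambda p: min((p[0] - d[0]) / (b1 - d[0]),
                         (p[1] - d[1]) / (b2 - d[1]))
    return max(feas, key=gain)

base = [(10, 0), (8, 5), (5, 8)]
print(ks(base + [(-1, 1000)]))   # (8, 5): the extreme point is excluded
print(ks(base + [(0.1, 1000)]))  # (0.1, 1000): now it sets b2 = 1000 and wins
```

So the "weak improvement" from (-1, 1000) to (0.1, 1000) leaves player 1 with 0.1 instead of 8.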
In the original paper, they have “Assumption 4”, which clearly states that they disregard solutions that don’t dominate the disagreement point. But you have a good point that when those solutions are taken into account, you don’t really have monotonicity.