I really wish you’d included the outside-of-game considerations. The example of what to eat for dinner is OVERWHELMINGLY about the future relationship between the diners, not about the result itself. This is true of all real-world bargaining (where you’re making commitments and compromises): you’re giving up some immediate value in order to make future interactions much better.
Agreed. The bargaining solution for the entire game can be very different from adding up the bargaining solutions for the subgames. If there’s a subgame whose outcome Alice cares about very much (interior decorating choices) and Bob doesn’t, and another whose outcome Bob cares about very much (food choice) and Alice doesn’t, then the bargaining solution for the entire relationship game will end up being something like “Alice and Bob get some relative weights on how important their preferences are, and in every subgame the weighted sum of their utilities is maximized. Thus Alice gets Alice-favoring outcomes in the subgames she cares most about winning, and Bob gets Bob-favoring outcomes in the subgames he cares most about winning.”
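A minimal sketch of that weighted-sum idea (all numbers, option names, and weights below are invented for illustration; none of them come from the post):

```python
# Hypothetical toy model: once Alice and Bob have settled on bargaining
# weights, each subgame is resolved by maximizing the weighted sum of
# utilities, so each side "wins" the subgames they care most about.

# (alice_utility, bob_utility) for each option in each subgame.
subgames = {
    "decorating": {"alice_pick": (10, -1), "bob_pick": (-8, 1)},
    "food":       {"alice_pick": (1, -9),  "bob_pick": (-1, 12)},
}

w_alice, w_bob = 1.0, 1.0  # assumed weights from the overall bargain

for name, options in subgames.items():
    best = max(options, key=lambda o: w_alice * options[o][0] + w_bob * options[o][1])
    print(name, "->", best)
# decorating -> alice_pick   (Alice cares more, so she wins here)
# food       -> bob_pick     (Bob cares more, so he wins here)
```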
And in particular, since it’s a sequential game, Alice can notice if Bob isn’t being fair and enforce the bargaining solution by going “if you’re not aiming for something sorta like this, I’ll break off the relationship.” So, from Bob’s point of view, aiming for any outcome that’s too Bob-favoring has really low utility, since Alice will inevitably catch on. (This is the time-extended version of “give up on achieving any outcome that drives the opponent below their BATNA.”) Basically, in terms of raw utility it’s still a bargaining game deep down, but once both sides take into account how the other will react, the payoff matrix for the restaurant game (with future interactions factored in) looks like “it’s a really bad idea to aim for an outcome the other party would regard as unfair.”
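A toy calculation of why aiming too Bob-favoring backfires, under assumptions I’m making up here (the per-round payoffs, the detection probability, and Alice walking away once she catches on are all invented):

```python
# All numbers invented. Bob compares committing to the roughly-fair
# outcome vs. a greedy one, when Alice breaks off the relationship
# (dropping Bob to his BATNA) as soon as she notices the unfairness.

rounds = 20
batna = 0.0          # Bob's per-round payoff once Alice walks away
fair = 5.0           # Bob's per-round payoff from the fair-ish outcome
greedy = 8.0         # Bob's per-round payoff while the grab goes unnoticed
p_caught = 0.5       # per-round chance Alice catches on

def expected_greedy_total():
    """Greedy payoff each round until first detection, BATNA afterwards."""
    total, p_unnoticed = 0.0, 1.0
    for _ in range(rounds):
        total += p_unnoticed * greedy + (1 - p_unnoticed) * batna
        p_unnoticed *= 1 - p_caught
    return total

print("fair  :", rounds * fair)                       # 100.0
print("greedy:", round(expected_greedy_total(), 1))   # 16.0 -- far worse
```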
Maybe a side note to not forget outside-of-game considerations? But I’m perfectly fine reading about (4/3)πr³ without “don’t forget that real objects have densities that are never uniform and probably hard to measure, that gravity differs from place to place, and that you almost certainly have an ellipsoid or something even more complicated instead.” I definitely prefer a world that can present the formula simply, without having to account for everything you’d actually need to handle when using it in a broader context.
Ok, downvoted for that enough that I should just shut up. But I learn slowly.
These aren’t outside considerations. Future interactions (or, I guess, highly-suspicious superrational shared-causality) are the primary driver of any non-Nash outcome. Using these examples is more misleading than the canonical frictionless uniform spherical elephant; and even for that, every textbook and professor is VERY clear about the limitations of the simplified equation.
I’m a huge fan of the research and exploration of this kind of game theory. But for anyone who doesn’t really understand the VERY limiting assumptions behind it, it’s going to be very misleading.
A better example might be literally paying for something in a marketplace you’re never going to visit again. You don’t have much cash, but you do have barter items: barter what you’ve got and compensate for the difference in cash. The cooperative part is “yes, a trade is good”; the competitive part is “but where on the list of acceptable barters will we land?”
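For that one-shot marketplace case, here’s a sketch of the two layers (the offers and utilities are invented, and I’m using the standard Nash product over the no-deal point as the selection rule, which the comment above doesn’t commit to):

```python
# One-shot barter: that any trade happens is the cooperative part;
# which acceptable trade happens is the competitive part.
# All options and utilities below are invented for illustration.

no_deal = (0.0, 0.0)  # disagreement point: walk away with your stuff

# (buyer_utility, seller_utility) for each candidate bundle
offers = {
    "knife + 2 coins": (4.0, 3.0),
    "knife only":      (6.0, 1.0),
    "3 coins":         (1.0, 5.0),
}

def nash_product(u):
    """Product of gains over the disagreement point (Nash bargaining rule)."""
    return (u[0] - no_deal[0]) * (u[1] - no_deal[1])

best = max(offers, key=lambda o: nash_product(offers[o]))
print(best)  # "knife + 2 coins": 4*3 = 12 beats 6*1 = 6 and 1*5 = 5
```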
I guess the difficulty is that the example really does want to say “all games can be decomposed like this if they’re denominated, not just games that sound kind of like cash”, but any game without significant reputational/relationship effects is gonna sound kind of like cash.
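For reference, the decomposition being gestured at here is mechanical once payoffs are denominated in a common unit: any such game splits into a pure common-interest game plus a pure zero-sum game. A sketch with a made-up 2x2 payoff matrix:

```python
import numpy as np

# Invented 2x2 payoffs. With transferable utility, the game splits into
# a cooperative part (identical payoffs: "grow the pie") and a
# competitive part (zero-sum: "split the pie").

A = np.array([[4.0, 0.0],   # row player's payoffs
              [2.0, 1.0]])
B = np.array([[2.0, 3.0],   # column player's payoffs
              [0.0, 1.0]])

coop = (A + B) / 2   # both players get this part: pure common interest
comp = (A - B) / 2   # row gets +comp, column gets -comp: pure zero-sum

assert np.allclose(A, coop + comp)
assert np.allclose(B, coop - comp)
print(coop)   # [[3.  1.5]
              #  [1.  1. ]]
print(comp)   # [[ 1.  -1.5]
              #  [ 1.   0. ]]
```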
I now agree with you. Or possibly with a steelmanned you, who can say. ;)
This is why I was stressing that “chaa” and “fair” are very different concepts, and that this equilibrium notion is very much based on threats. They just need to be asymmetric threats that the opponent can’t defuse in order to work (or ways of asymmetrically benefiting yourself that your opponent can’t ruin; those work just as well).
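A small illustration of that threat-dependence (all numbers invented): the same feasible outcomes, scored with the Nash product, land in different places depending on the threat point, regardless of what we’d call “fair”:

```python
# Invented (alice, bob) utilities for the feasible outcomes.
outcomes = [(6.0, 1.0), (5.0, 3.0), (4.0, 4.0), (1.0, 6.0)]

def solve(threat_point):
    """Nash-product maximizer among outcomes both sides prefer to their threats."""
    ta, tb = threat_point
    feasible = [(a, b) for a, b in outcomes if a >= ta and b >= tb]
    return max(feasible, key=lambda u: (u[0] - ta) * (u[1] - tb))

print(solve((0.0, 0.0)))  # (4.0, 4.0): the symmetric outcome wins
print(solve((3.0, 0.0)))  # (5.0, 3.0): an undefusable Alice-side threat
                          # shifts the split toward Alice
```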
“in physical reality, payoffs outside of negotiations can depend very much on the players’ behavior inside the negotiations, and thus is not a constant. Nash himself wrote about this limitation (Nash, 1953) just three years after originally proposing the Nash bargaining solution. For instance, if someone makes an unacceptable threat against you during a business negotiation…” (from the next post in this sequence, https://www.lesswrong.com/posts/RZNmNwc9SxdKayeQh/unifying-bargaining-notions-2-2; see also Critch’s first boundary post, https://www.lesswrong.com/posts/8oMF8Lv5jiGaQSFvo/boundaries-part-1-a-key-missing-concept-from-utility-theory)
I’m not really concerned about saying “but reputation matters; the solution you land on here affects your reputation later” since that should be baked into the payoffs.
But I do think it’s important to note that what happens during the negotiation can affect the payoffs even of the current game, which this analysis otherwise treats as constant.