The Ultimatum Game
The Ultimatum Game is a simple game in which two players attempt to split a $100 reward. They can communicate with each other for 10 minutes, after which:
Player 1 proposes an integer split (e.g. $75 for Player 1, $25 for Player 2).
Player 2 may Accept or Reject this split. If Player 2 rejects, both players receive nothing. Otherwise, the money is distributed according to the agreed-upon split.
At first glance, the mathematical analysis is simple: Player 2 should always accept (since anything is better than nothing), so Player 1 should offer a 99-to-1 split to maximize their winnings.
Much of the commentary around this game revolves around the fact that when you play this game with humans, Player 2’s sense of “fairness” will cause them to “irrationally” reject sufficiently imbalanced splits.
But this post isn’t about people’s feelings. It’s about rational agents attempting to maximize wealth. (I don’t doubt that all these ideas have been discussed before, though in most LW posts I found that mention this game, the game itself isn’t discussed for more than a paragraph or two.)
A Veto is a Powerful Bargaining Chip
If you’re Player 2 and you want to walk away with more than $1, what do you do?
It’s pretty simple, actually—all you need to do is to immediately communicate to Player 1 that you’ve sworn an Unbreakable Vow that you will reject anything other than a 99-1 split in your favor. (Or, more practically, give Player 1 a cryptographic proof of a cryptographic contract that destroys $1000 if you accept anything other than 99-1.) And just like that, the tables are turned. Player 1 now gets to decide between walking away with $1 or walking away with nothing.
This style of play involves reducing your options and committing to throwing away money in a wide variety of scenarios. But against a Player 1 who’s as naive as the original analysis’s Player 2, it works. It’s the madman theory of geopolitics—sometimes the best move is to declare yourself crazy.
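To put numbers on it, here’s a minimal sketch (Python, with a hypothetical min_acceptable threshold standing in for the Unbreakable Vow) of Player 1’s best response once Player 2’s commitment is public and credible:

```python
POT = 100

def player1_best_offer(min_acceptable: int) -> tuple[int, int]:
    """Player 1's best response when Player 2 has credibly committed to
    rejecting any offer that gives them less than `min_acceptable` dollars.

    Offering more than the threshold wastes money, and offering less gets
    rejected (worth $0), so Player 1 offers exactly the threshold.
    Returns (player1_share, player2_share).
    """
    player2_share = min_acceptable
    return (POT - player2_share, player2_share)

# No commitment: Player 2 "accepts anything", so Player 1 keeps $99.
print(player1_best_offer(min_acceptable=1))   # (99, 1)

# Unbreakable Vow: Player 2 rejects everything below $99, so Player 1 is
# choosing between $1 and nothing -- and $1 beats nothing.
print(player1_best_offer(min_acceptable=99))  # (1, 99)
```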
Examples
This game corresponds fairly directly to the idea of economic surplus: in a positive-sum transaction, both sides want the transaction to go through, but there remains the lingering question of how to split the surplus.
I unfortunately read the news a lot, so I see a lot of big companies and governments getting into fights with this shape.
Apple vs Epic Games: Both sides make more money with Fortnite on the App Store. But Apple wants a large percentage of the revenue, and Epic chose to reject their demands.
Australia vs Google: Google makes tons of money in Australia, and Australia would prefer not to have Google leave, but that may be what happens if Australia doesn’t drop its proposed law that would take significant revenue away from Google.
California vs Uber and Lyft: Another story of brinkmanship, where Uber/Lyft would rather get 0 revenue from California than accept the California legislature’s rules for paying drivers more.
If the back-and-forth bluffing in this game reminds you of a government shutdown—well, it reminds me of that, too. The majority party plays Player 1, the minority party with a filibuster plays Player 2.
Every time you swipe your credit card, the merchant pays the credit card company around 3% of the purchase. Visa and Mastercard set that fee to a level where nearly all merchants will agree to accept their cards. American Express fees are higher, which is why their cards are accepted in fewer places.
Unions and companies—companies can set wages as they wish, but unions have a “reject” option, namely going on strike.
In everyday life, pretty much anything involving haggling (e.g. buying a house) will resemble this game.
Comparison to Prisoner’s Dilemma
Let’s imagine two types of mindsets: Compromiser and Hardliner. The Compromiser will accept the “I get $1, you get $99” deal, “unfair” though it is. The Hardliner will never accept or propose anything but “I get $99, you get $1”. If two hardliners play, they both get $0; if two compromisers play, they each get $50.
You can now make the standard 2x2 box in your head, and notice that there are 2 equilibria—CH and HC. Compared to Prisoner’s Dilemma, this game is really easy: 3 of the 4 boxes will end up maximizing total surplus, and the one that doesn’t is not a stable equilibrium.
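If you’d rather see the 2x2 box checked explicitly, here’s a small sketch (assuming the payoffs above: 50/50 for two Compromisers, 99/1 favoring the Hardliner in a mixed pairing, 0/0 for two Hardliners) that tests which cells are pure Nash equilibria:

```python
# Payoffs as (row player, column player) for the Compromiser/Hardliner game.
payoffs = {
    ("C", "C"): (50, 50),
    ("C", "H"): (1, 99),
    ("H", "C"): (99, 1),
    ("H", "H"): (0, 0),
}

def is_nash(row: str, col: str) -> bool:
    """A cell is a pure Nash equilibrium if neither player gains by switching
    their own strategy while the other's strategy is held fixed."""
    row_payoff, col_payoff = payoffs[(row, col)]
    best_row = max(payoffs[(r, col)][0] for r in "CH")
    best_col = max(payoffs[(row, c)][1] for c in "CH")
    return row_payoff == best_row and col_payoff == best_col

for cell, payoff in payoffs.items():
    print(cell, payoff, "equilibrium" if is_nash(*cell) else "")
# Only ("C", "H") and ("H", "C") come out as equilibria; ("H", "H") is not,
# because either player does better by switching to C and taking the $1.
```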
In terms of “moral takeaways”, Prisoner’s Dilemma has a vibe of, “if you have two people who can keep their promises, they’ll do well for themselves in the world.” This game’s takeaway is a bit more complicated: “Take a hard line and stand up for yourself, otherwise the world will pass you by. But don’t go too far beyond what’s fair.”
Adding iterations to this game is interesting:
“Extended haggling”: If Player 2 rejects the offer, the pot shrinks by $2 to $98, but Player 1 can now make a new offer. If you run standard game theory on this, using backwards induction, you get that “optimal play” is for Player 1 to offer a 50-50 split in the first round, and for Player 2 to accept. (When the pot is $2, it’ll be a $1-$1 split; knowing this, Player 1 should offer a $2-$2 split when the pot is $4, since a $3-$1 offer would give Player 2 no incentive to accept. And so on up to $100. There’s a short code sketch of this backward induction after the list of variants below.)
This is an emotionally pleasing result, but note that “take it or leave it” strategies still apply just as well here, where Player 2 threatens to reject all future offers, or Player 1 threatens to make all future offers have $0 for Player 2.
“Chicken”: If Player 2 rejects the offer, both players lose $1, but the pot stays at $100. If one player knows they have more stomach for losses than the other player, then driving a hard bargain is the right strategy. But if both think this, they can manage to end up losing money, quite unnecessarily.
“Representative democracy”: The game is played by two elected officials representing two tribes. After each game, both sides hold an election. If a politician ever agrees to a 99-1 split, they know for sure they’ll lose their election to someone who promises they can bring home 20 or 30. As time goes on, office-seekers start to promise 60 or 70 in order to win, and then by the laws of math, at least one side brings home 50 or less, a “betrayal”. Both sides elect hardliners, no deal is made, the people suffer. Eventually, the sentiment turns against the hardliners (“are you better off now than you were a few games ago?”) and bipartisan compromise is restored.
I wonder how this system behaves under simple models of the electorate (how much they weight promises vs results).
It’s interesting that this feels roughly equivalent to the original game (it’s basically a game between the two tribes), but the dynamics seem different.
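Here’s the backward-induction sketch promised above for the “extended haggling” variant; it’s a minimal version, assuming (as in the argument above) that Player 2 only accepts offers strictly better than what waiting for the next round would get them.

```python
def haggling_split(pot: int = 100, shrink: int = 2) -> tuple[int, int]:
    """Backward induction for the shrinking-pot variant.

    Player 1 proposes every round; each rejection shrinks the pot by
    `shrink` dollars. Player 2 accepts only offers strictly better than
    their continuation value (what they'd get by rejecting and waiting).
    Returns the (player1_share, player2_share) agreed to in round one.
    """
    player1_share, player2_value = 0, 0  # an empty pot is worth $0 to both
    for current_pot in range(shrink, pot + 1, shrink):
        # Player 1 offers $1 more than Player 2's continuation value,
        # which Player 2 accepts, and Player 1 keeps the rest.
        player2_value = player2_value + 1
        player1_share = current_pot - player2_value
    return (player1_share, player2_value)

print(haggling_split())  # (50, 50), the even split described above
```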
From a total-surplus perspective, the relative “easiness” of this game is encouraging—it’s a good thing that these dynamics, not those of prisoner’s dilemma, are the ones that govern every supply chain, joint venture, and partnership agreement.
One last note: There’s a fun way to combine this and prisoner’s dilemma: namely, having there be 2 people who make an offer (but still 1 person who accepts/rejects).
The game theory now tells you that both people making the offer should offer a $1-$99 split, otherwise the other person will undercut them. If we imagine the offer-makers as companies and the offer-taker as the consumer, we’ve gone from total monopoly to total competition. In total competition, the companies capture almost none of the value they generate, and it almost all goes to the consumer—the miracle of capitalism.
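A toy sketch of that undercutting race, assuming the consumer simply takes the better offer and the two companies take turns best-responding to each other’s last offer:

```python
POT = 100

def best_response(rival_offer: int) -> int:
    """What a company should offer the consumer, given its rival's offer.

    Winning the sale is worth POT minus your own offer, so you outbid the
    rival by $1 whenever that still leaves a profit; once the rival is
    already offering $99, all you can do is match it.
    """
    if rival_offer < POT - 1:
        return rival_offer + 1
    return POT - 1

offers = [1, 1]          # both start at the monopoly offer: keep $99, give $1
for turn in range(200):
    mover = turn % 2     # the companies alternate revising their offers
    offers[mover] = best_response(offers[1 - mover])

print(offers)            # [99, 99]: the consumer captures almost all the surplus
```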
In this scenario, the two companies should want to collude with each other or merge with each other. The Prisoner’s Dilemma situation makes the former difficult, and antitrust law interferes with the latter.
________
Here’s a claim to close this piece: People overuse Prisoner’s Dilemma as a mental model, when they should be using something more along these lines.
This game comes from economics, not the criminal justice system.
It’s parameter-free (whereas PD has an oft-overlooked parameter: the ratio of how much extra you get from DD over CD, versus how much extra you get from CC over DD). Probably related: You never have to draw a payout grid for this one, because it’s more intuitive.
There’s a continuum of strategies (“insist on 60-40” vs “insist on 99-1”), providing nice mathematical ways of seeing gradual societal change evolve over time.
Strategies in the “extended haggling” version are even more interesting: How long do you hold out before caving? Do you gradually relax your demands?
It’s interesting both with and without repetitions—and the repetitions make it more likely for the “no deal” outcome to show up as a negotiating tactic. Contrast with PD where repetition is a key mechanism for salvaging a fairly hopeless situation.
The almost-but-not-quite symmetry of it is a bit awkward—I wonder if the “extended haggling” version above basically resolves that issue. (The game theoretic prediction of the original case does come true sometimes—witness the $2/share Bear Stearns deal—and it’s usually because there is no more time for negotiation.)