Most Prisoner’s Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems

I previously claimed that most apparent Prisoner’s Dilemmas are actually Stag Hunts. I now claim that, in practice, they’re Schelling Pub games (defined below). I conclude with some lessons for fighting Moloch.
This post turned out especially dense with inferential leaps and unexplained terminology. If you’re confused, please ask in the comments and I’ll try to clarify.
Some ideas here are due to Tsvi Benson-Tilsen.
The title of this post used to be Most Prisoner’s Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes. I’m changing it based on this comment. “Battle of the Sexes” is a game where a male and female (let’s say Bob and Alice) want to hang out, but each of them would prefer to engage in gender-stereotyped behavior. For example, Bob wants to go to a football game, and Alice wants to go to a museum. The gender issues are distracting, and although it’s the standard, the game isn’t that well-known anyway, so sticking to the standard didn’t buy me much (in terms of reader understanding).
I therefore present to you,
the Schelling Pub Game:
Two friends would like to meet at the pub. In order to do so, they must make the same selection of pub (making this a Schelling-point game). However, they have different preferences about which pub to meet at. For example:
Alice and Bob would both like to go to a pub this evening.
There are two pubs: the Xavier, and the Yggdrasil.
Alice likes the Xavier twice as much as the Yggdrasil.
Bob likes the Yggdrasil twice as much as the Xavier.
However, Alice and Bob also prefer to be with each other. Let’s say they like being together ten times as much as they like being apart.
Schelling Pub Game payoff matrix (payoffs written Alice;Bob):

                     Bob: Xavier   Bob: Yggdrasil
  Alice: Xavier         20;10           2;2
  Alice: Yggdrasil       1;1           10;20
The important features of this game are:
The (pure-strategy) Nash equilibria are all Pareto-optimal. There is no “individually rational agents work against each other” problem, like in Prisoner’s Dilemma or even Stag Hunt.
There are multiple equilibria, and different agents prefer different equilibria.
Thus, realistically, agents may not end up in equilibrium at all—because (in the single-shot game) they don’t know which to choose, and because (in an iterated version of the game) they may make locally sub-optimal choices in order to influence the long-run behavior of other players.
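To make these features concrete, here’s a minimal sketch in Python (my own illustration, using the payoffs from the table above) that enumerates the pure-strategy Nash equilibria and checks each for Pareto-optimality:

```python
# Pure-strategy analysis of the Schelling Pub Game.
# Payoffs are (alice, bob), indexed by (alice_choice, bob_choice).
PAYOFFS = {
    ('X', 'X'): (20, 10),
    ('X', 'Y'): (2, 2),
    ('Y', 'X'): (1, 1),
    ('Y', 'Y'): (10, 20),
}
CHOICES = ['X', 'Y']

def is_nash(a, b):
    """Neither player can gain by unilaterally switching pubs."""
    pa, pb = PAYOFFS[(a, b)]
    return all(PAYOFFS[(a2, b)][0] <= pa for a2 in CHOICES) and \
           all(PAYOFFS[(a, b2)][1] <= pb for b2 in CHOICES)

def is_pareto_optimal(a, b):
    """No other outcome is at least as good for both and strictly better for one."""
    pa, pb = PAYOFFS[(a, b)]
    return not any(qa >= pa and qb >= pb and (qa, qb) != (pa, pb)
                   for qa, qb in PAYOFFS.values())

for a in CHOICES:
    for b in CHOICES:
        if is_nash(a, b):
            print(f"Nash equilibrium: ({a},{b}), "
                  f"Pareto-optimal: {is_pareto_optimal(a, b)}")
```

Both (X,X) and (Y,Y) come out as Pareto-optimal equilibria, and each favors a different player; that mismatch is the whole coordination problem.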
(Edited to add, based on comments:)
Here’s a summary of the central argument which, despite the lack of pictures, may be easier to understand.
Most Prisoner’s Dilemmas are actually iterated.
Iterated games are a whole different game with a different action space (because you can react to history), a different payoff matrix (because you care about future payoffs, not just the present), and a different set of equilibria.
It is characteristic of PD that players are incentivized to play away from the Pareto frontier; i.e., no Pareto-optimal point is an equilibrium. This is not the case with iterated PD.
It is characteristic of Stag Hunt that there is a Pareto-optimal equilibrium, but there is also another equilibrium which is far from optimal. This is also the case with iterated PD. So iterated PD resembles Stag Hunt.
However, it is furthermore true of iterated PD that there are multiple different Pareto-optimal equilibria, which benefit different players more or less. Also, if players don’t successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). This makes iterated PD resemble the Schelling Pub Game.
In fact, the Folk Theorem suggests that most iterated games will resemble the Schelling Pub Game in this way.
In a comment on The Schelling Choice is “Rabbit”, not “Stag” I said:
In the book The Stag Hunt, Skyrms similarly says that lots of people use Prisoner’s Dilemma to talk about social coordination, and he thinks people should often use Stag Hunt instead.
I think this is right. Most problems which initially seem like Prisoner’s Dilemma are actually Stag Hunt, because there are potential enforcement mechanisms available. The problems discussed in Meditations on Moloch are mostly Stag Hunt problems, not Prisoner’s Dilemma problems—Scott even talks about enforcement, when he describes the dystopia where everyone has to kill anyone who doesn’t enforce the terrible social norms (including the norm of enforcing).
This might initially sound like good news. Defection in Prisoner’s Dilemma is an inevitable conclusion under common decision-theoretic assumptions. Trying to escape multipolar traps with exotic decision theories might seem hopeless. On the other hand, rabbit in Stag Hunt is not an inevitable conclusion, by any means.
Unfortunately, in reality, hunting stag is actually quite difficult. (“The schelling choice is Rabbit, not Stag… and that really sucks!”)
Inspired by Zvi’s recent sequence on Moloch, I wanted to expand on this. These issues are important, since they determine how we think about group action problems / tragedy of the commons / multipolar traps / Moloch / all the other synonyms for the same thing.
My current claim is that most Prisoner’s Dilemmas are actually Schelling Pub games. But let’s first review the relevance of Stag Hunt.
Your PD Is Probably a Stag Hunt
There are several reasons why an apparent Prisoner’s Dilemma may be more of a Stag Hunt.
The game is actually an iterated game.
Reputation networks could punish defectors and reward cooperators.
There are enforceable contracts.
Players know quite a bit about how other players think (in the extreme case, players can view each other’s source code).
Each of these formal models creates a situation where players can get into a cooperative equilibrium. The challenge is that you can’t unilaterally decide everyone should be in the cooperative equilibrium. If you want good outcomes for yourself, you have to account for what everyone else will probably do. If you think everyone is likely to be in a bad equilibrium where people punish each other for cooperating, then aligning with that equilibrium might be the best you can do! This is like hunting rabbit.
Exercise: is there a situation in your life, or within spitting distance, which seems like a Prisoner’s Dilemma to you, where everyone is stuck hurting each other due to bad incentives? Is it an iterated situation? Could there be reputation networks which weed out bad actors? Could contracts or contract-like mechanisms be used to encourage good behavior?
So, why do we perceive so many situations to be Prisoner’s Dilemma-like rather than Stag Hunt-like? Why does Moloch sound more like “each individual is incentivized to make things worse for everyone else” than “everyone is stuck in a bad equilibrium”?
Sarah Constantin speculated that, in the decades that humanity has lived under the threat of nuclear war, we’ve developed the assumption that we’re living in a world of one-shot Prisoner’s Dilemmas rather than repeated games, and lost some of the social technology associated with repeated games. Game theorists do, of course, know about iterated games, and there’s some fascinating research in evolutionary game theory, but the original formalization of game theory was for the application of nuclear war, and the 101-level framing that most educated laymen hear often presents one-shot games as the prototypical case and repeated games as hard to reason about without computer simulations.
To use board-game terminology, the game may be a Prisoner’s Dilemma, but the metagame can use enforcement techniques. Accounting for enforcement techniques, the game is more like a Stag Hunt, where defecting is “rabbit” and cooperating is “stag”.
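Here’s a sketch of that reframing (my illustration; the payoff numbers, reward 2, punishment 1, sucker 0, temptation 3, are an assumption consistent with the averages computed later in this post). Restrict the iterated game to just two strategies: Grim Trigger (“stag”: cooperate until the other player defects, then defect forever) and Always Defect (“rabbit”). The strategy-level game is then literally a Stag Hunt:

```python
# The "metagame" of iterated PD, restricted to two iterated strategies.
# Entries are long-run average payoffs (row player, column player).
metagame = {
    ('grim', 'grim'): (2, 2),  # mutual cooperation forever
    ('grim', 'alld'): (1, 1),  # one sucker round, then mutual defection
                               # (the single round washes out of the average)
    ('alld', 'grim'): (1, 1),
    ('alld', 'alld'): (1, 1),  # mutual defection forever
}
STRATS = ['grim', 'alld']

def is_nash(r, c):
    """Neither player can raise their payoff by unilaterally switching strategy."""
    pr, pc = metagame[(r, c)]
    return all(metagame[(r2, c)][0] <= pr for r2 in STRATS) and \
           all(metagame[(r, c2)][1] <= pc for c2 in STRATS)

print([(r, c) for r in STRATS for c in STRATS if is_nash(r, c)])
# [('grim', 'grim'), ('alld', 'alld')]: a Pareto-optimal equilibrium
# and a safe but poor one, i.e. stag and rabbit.
```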
Schelling Pubs
But this is a bit informal. You don’t separately choose how to metagame and how to game; really, your iterated strategy determines what you do in individual games.
So it’s more accurate to just think of the iterated game. There are a bunch of iterated strategies which you can choose from.
The key difference between the single-shot game and the iterated game is that cooperative strategies, such as Tit for Tat (among others), are available. These strategies have the property that (1) they are equilibria—if you know the other player is playing Tit for Tat, there’s no reason for you not to play it too; and (2) if both players use them, they end up cooperating.
A key feature of the Tit for Tat strategy is that if you do end up playing against a pure defector, you do almost as well as you could possibly do against them. This doesn’t sound very much like a Stag Hunt. It begins to sound like a Stag Hunt in which you can change your mind and go hunt rabbit if the other person doesn’t show up to hunt stag with you.
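Here’s a quick simulation sketch of that property, with the same assumed payoffs as above: Tit for Tat loses one sucker round to Always Defect and then matches it, ending up within a hair of the best average anyone can achieve against a pure defector:

```python
# Iterated Prisoner's Dilemma: Tit for Tat vs. Always Defect.
# Payoffs to (me, them): C/C -> (2, 2), C/D -> (0, 3), D/C -> (3, 0), D/D -> (1, 1).
PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
          ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    """Cooperate first; afterwards, copy the opponent's previous move."""
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []  # each entry: (own_move, opponent_move)
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        total_a, total_b = total_a + pa, total_b + pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return total_a / rounds, total_b / rounds

print(play(tit_for_tat, always_defect))  # (0.99, 1.02): one bad round, then 1 per round
# Against a pure defector the best possible average is 1.0 (defect every round).
```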
Sounds great, right? We can just play one of these cooperative strategies.
The problem is, there are many possible self-enforcing equilibria. Each player can threaten the other player with a Grim Trigger strategy: they defect forever the moment some specified condition isn’t met. This can be used to extort the other player for more than just the mutual-cooperation payoff. Here’s an illustration of possible outcomes, with the enforceable frequencies in the white area:
Alice could be extorting Bob by cooperating 2/3rds of the time, with a grim-trigger threat of never cooperating at all. Alice would then get an average payoff of 2⅓, while Bob would get an average payoff of 1⅓.
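To spell out the arithmetic behind those numbers (a sketch; the payoff values 2, 3, and 0 are my assumption, chosen because they reproduce the averages in the text): Bob must cooperate every round to avoid tripping the trigger, while Alice defects a third of the time and pockets the temptation payoff:

```python
# Average per-round payoffs when Alice enforces full cooperation from Bob
# while only cooperating with frequency f herself.
def extortion_payoffs(f, reward=2, temptation=3, sucker=0):
    alice = f * reward + (1 - f) * temptation  # C/C reward, or D/C temptation
    bob   = f * reward + (1 - f) * sucker      # C/C reward, or C/D sucker
    return alice, bob

print(extortion_payoffs(2 / 3))  # (2.333..., 1.333...): the 2 1/3 vs 1 1/3 above
```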
In the artificial setting of Prisoner’s Dilemma, it’s easy to say that Cooperate, Cooperate is the “fair” solution, and an equilibrium like I just described is “Alice exploiting Bob”. However, real games are not so symmetric, and so it will not be so obvious what “fair” is. The purple squiggle highlights the Pareto frontier—the space of outcomes which are “efficient” in the sense that no alternative is purely better for everybody. These outcomes may not all be fair, but they all have the advantage that no “money is left on the table”—any “improvement” we could propose for those outcomes makes things worse for at least one person.
Notice that I’ve also colored areas where Bob and Alice are doing worse than payoff 1. Bob can’t enforce Alice’s cooperation while defecting more than half the time; Alice would just defect. And vice versa. All of the points within the shaded regions have this property. So not all Pareto-optimal solutions can be enforced.
Any point in the white region can be enforced, however. Each player could be watching the statistics of the other player’s cooperation, prepared to pull a grim trigger if the statistics ever stray too far from the target point. This includes so-called mutual blackmail equilibria, in which both players cooperate with probability slightly above zero (while threatening to never cooperate at all if the other player detectably diverges from that frequency). This idea—that ‘almost any’ outcome can be enforced—is known as the Folk Theorem in game theory.
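Here’s a minimal sketch of that enforceability condition under the same assumptions, modelling the target point as independent per-round cooperation frequencies. The Folk Theorem logic: a point is enforceable by grim triggers only if it pays each player at least their safe minmax value (here 1, the mutual-defection payoff), since otherwise that player would rather take the punishment:

```python
# Which target cooperation frequencies (fa for Alice, fb for Bob) can grim
# triggers enforce? Model: independent moves each round at those frequencies;
# a player who abandons the deal is punished down to the mutual-defection
# payoff of 1, so compliance requires an average payoff of at least 1.
def avg_payoffs(fa, fb, R=2, T=3, S=0, P=1):
    alice = fa*fb*R + fa*(1-fb)*S + (1-fa)*fb*T + (1-fa)*(1-fb)*P
    bob   = fa*fb*R + fa*(1-fb)*T + (1-fa)*fb*S + (1-fa)*(1-fb)*P
    return alice, bob

def enforceable(fa, fb):
    return all(p >= 1 for p in avg_payoffs(fa, fb))

print(enforceable(1.0, 1.0))    # True: mutual cooperation
print(enforceable(2/3, 1.0))    # True: the extortion point above
print(enforceable(1.0, 0.4))    # False: Alice averages 0.8 < 1 and would walk
print(enforceable(0.05, 0.05))  # True: a grim mutual-blackmail equilibrium
```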
The Schelling Pub part is that (particularly with grim-trigger enforcement) everyone has to choose the same equilibrium to enforce; otherwise everyone is stuck playing defect. You’d rather be in even a bad mutual-blackmail equilibrium than have the two of you enforce incompatible points. Just like, in Schelling Pub, you’d prefer to meet together at any venue rather than end up at different places.
Furthermore, I would claim that most apparent Stag Hunts which you encounter in real life are actually Schelling Pub games, in the sense that there are many different stags to hunt and it isn’t immediately clear which one should be hunted. Each stag will be differently appealing to different people, so it’s difficult to establish common knowledge about which one is worth going after together.
Exercise: what stags aren’t you hunting with the people around you?
Taking Pareto Improvements
Fortunately, Grim Trigger is not the only enforcement mechanism which can be used to build an equilibrium. Grim Trigger creates a crisis in which you’ve got to guess which equilibrium you’re in very quickly, to avoid angering the other player; and no experimentation is allowed. There are much more forgiving strategies (and contrite ones, too, which help in a different way).
Actually, even using Grim Trigger to enforce things, why would you punish the other player for doing something better for you? There’s no motive for punishing the other player for raising their cooperation frequency.
In a scenario where you don’t know which Grim Trigger the other player is using, but you don’t think they’ll punish you for cooperating more than the target, a natural response is for both players to just cooperate a bunch.
So, it can be very valuable to use enforcement mechanisms which allow for Pareto improvements.
Taking Pareto improvements is about moving from the middle to the boundary:
(I’ve indicated the directions for Pareto improvements starting from the origin in yellow, as well as what happens in other directions; also, I drew a bunch of example Pareto improvements as black arrows to illustrate how Pareto improvements are awesome. Some of the black arrows might not be perfectly within the range of Pareto improvements, sorry about that.)
However, there’s also an argument against taking Pareto improvements. If you accept any Pareto improvement, you can be exploited in the sense mentioned earlier—you’ll accept any situation, so long as it’s not worse for you than where you started. So you will take some pretty poor deals. Notice that one Pareto improvement can prevent a different one—for example, if you move to (1/2, 1), then you can’t move to (1, 1/2) via Pareto improvement. So you could always reject a Pareto improvement because you’re holding out for a better deal. (This is the Schelling Pub aspect of the situation—there are Pareto-optimal outcomes which are better or worse for different people, so it’s hard to agree on which improvement to take.)
That’s where Cooperation between Agents with Different Notions of Fairness comes in. The idea in that post is that you don’t take just any Pareto improvement—you have standards of fairness—but you don’t just completely defect for less-than-perfectly-fair deals, either. What this means is that two such agents with incompatible notions of fairness can’t get all the way to the Pareto frontier, but the closer their notions of fairness are to each other, the closer they can get. And, if the notions of fairness are compatible, they can get all the way.
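Here’s a toy model in that spirit (my gloss, not that post’s exact construction): when a proposed split gives an agent less than they consider fair, they accept it only with enough probability to cap the other side’s expected take at what the agent’s own fairness notion would allow, so pushing past someone’s fairness line doesn’t pay. Incompatible notions then burn expected value, and the loss shrinks as the notions converge:

```python
# Toy bargaining over a pie of size 1; x is Alice's proposed share.
# fair_a: the share Alice considers fair for herself.
# fair_b: the share Bob considers fair for Alice (fair_a > fair_b means
#         their notions of fairness are incompatible).
def acceptance_probs(x, fair_a, fair_b):
    # Alice accepts a worse-than-fair deal with just enough probability that
    # Bob's expected take never exceeds what Alice's fair point gives him.
    p_alice = 1.0 if x >= fair_a else (1 - fair_a) / (1 - x)
    # Bob does the symmetric thing, capping Alice's expected take at fair_b.
    p_bob = 1.0 if x <= fair_b else fair_b / x
    return p_alice, p_bob

def expected_payoffs(x, fair_a, fair_b):
    p_alice, p_bob = acceptance_probs(x, fair_a, fair_b)
    p = p_alice * p_bob        # the deal happens only if both accept
    return p * x, p * (1 - x)  # disagreement pays (0, 0)

# Incompatible notions: every split leaves expected value on the table.
print(expected_payoffs(0.5, fair_a=0.6, fair_b=0.4))    # total 0.64 < 1
# Nearly compatible notions get close to the Pareto frontier.
print(expected_payoffs(0.5, fair_a=0.51, fair_b=0.49))  # total ~0.96
```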
Moloch is the Folk Theorem
Because of the Folk Theorem, most iterated games will have the same properties I’ve been talking about (not just iterated PD). Specifically, most iterated games will have:
The Stag-Hunt-like property: There is a Pareto-optimal equilibrium, but there is also an equilibrium far from Pareto-optimal.
The Schelling Pub property: There are multiple Pareto-optimal equilibria, so that even if you’re trying to cooperate, you don’t necessarily know which one to aim for; and, different options favor different people, making it a complex negotiation even if you can discuss the problem ahead of time.
There’s a third important property which I’ve been assuming, but which doesn’t follow so directly from the Folk Theorem: the suboptimal equilibrium is “safe”, in that you can unilaterally play that way to get some guaranteed utility. The Pareto-optimal equilibria are not similarly safe; mistakenly playing one of them when other people don’t can be worse than the “safe” guarantee from the poor equilibrium.
A game with all three properties is like Stag Hunt with multiple stags (where you all must hunt the same stag to win, but can hunt rabbit alone for a guaranteed mediocre payoff), or Schelling Pub where you can just stay home (you’d rather stay home than go out alone).
Lessons in Slaying Moloch
0. I didn’t even address this in this essay, but it’s worth mentioning: not all conflicts are zero-sum. In the introduction to the 1980 edition of The Strategy of Conflict, Thomas Schelling discusses the reception of the book. He recalls that a prominent political theorist “exclaimed how much this book had done for his thinking, and as he talked with enthusiasm I tried to guess which of my sophisticated ideas in which chapters had made so much difference to him. It turned out it wasn’t any particular idea in any particular chapter. Until he read this book, he had simply not comprehended that an inherently non-zero-sum conflict could exist.”
1. In situations such as iterated games, there’s no in-principle pull toward defection. Prisoner’s Dilemma seems paradoxical when we first learn of it (at least, it seemed so to me) because we are not accustomed to such a harsh divide between individual incentives and the common good. But perhaps, as Sarah Constantin speculated in Don’t Shoot the Messenger, modern game theory and economics have conditioned us to expect this conflict due to their emphasis on single-shot interactions. As a result, Moloch comes to sound like an inevitable gravity, pulling everything downwards. This is not necessarily the case.
2. Instead, most collective action problems are bargaining problems. If a solution can be agreed upon, we can generally use weak enforcement mechanisms (social norms) or strong enforcement (centralized governmental enforcement) to carry it out. But, agreeing about the solution may not be easy. The more parties involved, the more difficult.
3. Try to keep a path open toward better solutions. Since wide adoption of a particular solution can be such an important problem, there’s a tendency to treat alternative solutions as the enemy. This bars the way to further progress. (One could loosely characterize this as the difference between religious doctrine and democratic law; religious doctrine trades away the ability to improve in favor of the more powerful consensus-reaching technology of immutable universal law. But of course this oversimplifies things somewhat.) Keeping a path open for improvements is hard, partly because it can create exploitability. But it keeps us from getting stuck in a poor equilibrium.