“Stable Nash equilibrium” is a term of art that I don’t think you meant to evoke, but it’s true that you can reach better states if multiple people act in concert. Saying this is a Nash equilibrium means only that no single player can do better, assuming everyone else is a robot guaranteed to keep following their current strategy no matter what.
This equilibrium is a local maximum surrounded by a tiny moat of even-worse outcomes. The moat is very thin, and almost everything beyond it is better than this, but you need to pass through the even-worse moat in order to get to anything better. (And you can’t cross the moat unilaterally.)
Of course, it’s easy to vary the parameters of this thought-experiment to make the moat wider. If you set the equilibrium at 98 instead of 99, then you’d need 3 defectors to do better, instead of only 2; etc.
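To make that structure concrete, here’s a minimal sketch of one payoff rule with the properties described above: a lone defector does strictly worse, but once some threshold of players defect together, every defector does better. The specific numbers and the threshold parameter k are my own illustration, not the exact setup of the original thought experiment.

```python
# One illustrative payoff rule with the "moat" property; the numbers and
# the threshold k are my own choice, not the original thought experiment's.

def payoff(my_choice, total_defectors, k=2):
    """Payoff for one player, given how many players defected in total
    (counting this player, if they defected)."""
    if my_choice == "stay":
        return -99            # the status-quo equilibrium: bad for everyone
    if total_defectors < k:
        return -99 - 10       # too few defectors: status quo plus punishment (the moat)
    return -30                # enough defectors: the better outcome is reached

# "Everyone stays" is a Nash equilibrium: a lone defector drops from -99 to -109.
print(payoff("stay", 0), payoff("defect", 1))               # -99 -109
# But it is a bad equilibrium: two coordinated defectors each jump to -30.
print(payoff("defect", 2))                                  # -30
# Raising the threshold (the "98 instead of 99" variant) only widens the moat.
print(payoff("defect", 2, k=3), payoff("defect", 3, k=3))   # -109 -30
```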
So you can say “this is such an extreme example that I don’t expect real humans to actually follow it”, but that’s only a difference in degree, not a difference in kind. It’s pretty easy to find real-life examples where actual humans are actually stuck in an equilibrium that is strictly worse than some other equilibrium they theoretically could have, because switching would require coordination between a bunch of people at once (not just 2 or 3).
It is, in theory, but I feel like this underrates the real reason for most such situations: actual asymmetries in values, information, or both. A few things that may hold an otherwise pointless taboo or rule in place:
- it serving as a shibboleth that identifies the in-group. This is a tangible benefit in certain situations. It’s true that another marker could be chosen, perhaps one that is more inherently worthy rather than merely conventional, but switching takes time and adjustment and may create confusion;
- it being tied to some religious or ideological worldview, such that at least some people genuinely believe it’s beneficial and not just a convention. That makes them a lot more resistant to dropping it even if there were an attempt at coordination;
- it having become genuinely unpleasant to drop even at an individual level, simply because force of habit has led some individuals to internalize it.
In general, I think the game-theoretic model honestly doesn’t represent anything like a real-world situation well, because it creates a scenario so abstract and extreme that it’s impossible to imagine any of these dynamics at work. Even the worst, most dystopian totalitarianism, in which everyone spies on everyone else and everyone’s life is miserable, will at least have been started by a group of true believers who thought it was genuinely a good thing.
I contend examples are easy to find even after you account for all of those things you listed. If you’d like a more in-depth exploration of this topic, you might be interested in the book Inadequate Equilibria.
I’ve read Inadequate Equilibria, but that’s exactly the thing: this specific example doesn’t really convey that sort of situation. At the very least, some social interaction, as well as the path into the pathological equilibrium, is crucial to it. By stripping all of that away, the 99 °C example loses its point; those elements are an integral part of why such things happen.
That’s correct, but that just makes this a worse (less intuitive) version of the stag hunt.
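For anyone who wants that comparison spelled out, here’s a minimal sketch of a standard two-player stag hunt; the particular payoff numbers are a conventional textbook choice, not something taken from this thread.

```python
# A standard two-player stag hunt; these payoff numbers are one
# conventional choice, not something from this thread.
payoffs = {
    ("stag", "stag"): (4, 4),   # best outcome, but it requires coordination
    ("stag", "hare"): (0, 3),   # the lone stag hunter gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),   # safe but strictly worse equilibrium
}
# Both (stag, stag) and (hare, hare) are Nash equilibria: starting from
# (hare, hare), a single player who switches to stag falls from 3 to 0,
# so reaching the better equilibrium requires both players to move at once.
# That is the same "can't cross the moat unilaterally" structure, with two
# players instead of a threshold number of defectors.
```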