Stag/Rabbit is a simplification (hopefully obvious, but worth stating explicitly to avoid accidental motte/bailey-ing). A slightly higher-resolution simplification:
When it comes to “what norms do we want”, it’s not all-or-nothing. But if different groups are pushing different norms in the same space, there’s deadweight loss as some people get annoyed at others for violating their preferred norms, and/or get confused about what they’re actually supposed to be doing.
[modeling this out properly and explicitly would take me at least 30 minutes and possibly much longer. Makes more sense to do later on as a post]
Oh, I see; the slightly-higher-resolution version makes a lot more sense to me. When working out the game theory, I would caution that different groups pushing different norms is more like an asymmetric “Battle of the Sexes” problem, which is importantly different from the symmetric Stag Hunt. In Stag Hunt, everyone wants the same thing, and the problem is just about risk-dominance vs. payoff-dominance. In Battle of the Sexes, the problem is about how people who want different things manage to live with each other.
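The distinction can be made concrete with payoff matrices. The sketch below (illustrative payoff numbers and strategy labels, not anything canonical) brute-forces the pure-strategy Nash equilibria of both games. In the Stag Hunt, both equilibria rank the same for both players, so the problem is only which one gets selected; in Battle of the Sexes, both equilibria coordinate, but each player prefers a different one.

```python
from itertools import product

def pure_nash(payoffs, rows, cols):
    """Pure-strategy Nash equilibria of a bimatrix game:
    outcomes where neither player gains by deviating unilaterally.
    payoffs[(r, c)] = (row player's payoff, column player's payoff)."""
    eq = []
    for r, c in product(rows, cols):
        u1, u2 = payoffs[(r, c)]
        if all(payoffs[(r2, c)][0] <= u1 for r2 in rows) and \
           all(payoffs[(r, c2)][1] <= u2 for c2 in cols):
            eq.append((r, c))
    return eq

# Symmetric Stag Hunt: everyone prefers (Stag, Stag), but (Hare, Hare)
# is the safe (risk-dominant) equilibrium.
stag_hunt = {
    ("Stag", "Stag"): (4, 4), ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0), ("Hare", "Hare"): (3, 3),
}

# Asymmetric Battle of the Sexes: both coordinated outcomes are equilibria,
# but the players disagree about which one is best.
bos = {
    ("NormA", "NormA"): (2, 1), ("NormA", "NormB"): (0, 0),
    ("NormB", "NormA"): (0, 0), ("NormB", "NormB"): (1, 2),
}

print(pure_nash(stag_hunt, ["Stag", "Hare"], ["Stag", "Hare"]))
# [('Stag', 'Stag'), ('Hare', 'Hare')] — both players rank these the same way
print(pure_nash(bos, ["NormA", "NormB"], ["NormA", "NormB"]))
# [('NormA', 'NormA'), ('NormB', 'NormB')] — each player prefers a different one
```

Both games have two pure equilibria, so "coordination" is the surface problem in each. The difference is that in the Stag Hunt the disagreement is only about risk, while in Battle of the Sexes it's about who gets their preferred norm.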
Nod. Yeah that may be a better formulation. I may update the Staghunt post to note this.
“Notice that you’re not actually playing the game you think you’re playing” is maybe a better general rule. (I.e. in the Staghunt article, I was addressing people who think that they’re in a prisoner’s dilemma, but actually they’re in something more like a staghunt. But, yeah, at least some of the time they’re actually in a Battle of the Sexes, or… well, in real life it’s always actually some complicated nuanced thing.)
The core takeaway from the Staghunt article that still seems good to me is “if you feel like other people are defecting on your preferred strategy, actually check to see if you can coordinate on your preferred strategy. If it turns out people aren’t just making a basic mistake, you may need to actually convince people your strategy is good (or, learn from them why your strategy is not in fact straightforwardly good).”
I think this (probably?) remains a good strategy in most payoff-variants.
Thanks.