I suspect that, in this particular example, the unintuitiveness comes more from reasoning about subgame-perfection (and from the absence of a mechanism for punishing people who don’t punish “defectors”).
Per my reading of the OP, if someone sets their dial to 99 during a round when you’re supposed to set it to 100, then everyone sets it to 100 again the following round. Does this not count as a mechanism for punishing non-punishers?
Oh, I missed that—I thought they set it to 100 forever. In that case, I was wrong, and this indeed works as a mechanism for punishing non-punishers, at least from the mathematical point of view.
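To make the mechanism concrete, here is a minimal sketch, assuming the setup I take the OP to describe (100 players, the room temperature is the average dial setting, and each player’s per-round payoff is minus the temperature; none of these specifics are stated in this thread):

```python
# Minimal sketch of the dial game (assumptions, not confirmed details
# of the OP): 100 players each set a dial every round, the temperature
# is the average setting, and each player's payoff is -temperature.
#
# Equilibrium strategy: set 99; but if anyone deviated from the
# prescribed setting last round (including by failing to punish),
# the prescribed setting this round is 100.

def prescribed(history):
    """Prescribed dial setting given (dials, prescribed) pairs so far."""
    if not history:
        return 99
    last_dials, last_prescribed = history[-1]
    if any(d != last_prescribed for d in last_dials):
        return 100  # one round of punishment after any deviation
    return 99

def avg_payoff(num_rounds, deviation=None, n=100):
    """Player 0's average per-round payoff; `deviation` is an optional
    (round, dial) pair for a one-shot unilateral deviation."""
    history, total = [], 0.0
    for t in range(num_rounds):
        p = prescribed(history)
        dials = [p] * n
        if deviation is not None and t == deviation[0]:
            dials[0] = deviation[1]
        total -= sum(dials) / n  # payoff = -temperature
        history.append((dials, p))
    return total / num_rounds

print(avg_payoff(10))                    # comply forever: -99.0
print(avg_payoff(10, deviation=(0, 0)))  # deviate once:   -99.001
```

The deviator gains 0.99 in the round they deviate but loses 1 in the punishment round, and by the same logic a player who refuses to punish gains 0.01 but loses 1 in the round after; so every deviation is a losing move, which is all the Nash-equilibrium claim needs.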
Mathematics aside, I still think the example would be clearer if there were explicit mechanisms for punishing individuals. As it is, the exact mechanism critically relies on details of the example, and on unintuitive mathematical nitpicks. If you instead had explicit norms, meta-norms, etc., you would avoid this. (E.g., suppose anybody can punish anybody else, at no cost to themselves, by reducing the target’s payoff by 1. The default is that you don’t do it, except that there is a rule requiring you to punish rule-breakers (including breakers of this rule).)
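A quick back-of-the-envelope version of this proposal, with illustrative numbers of my own choosing rather than anything from the OP:

```python
# Sketch of the explicit norm/meta-norm variant proposed above.
# Assumptions: 100 agents; punishing costs the punisher 0 and the
# target 1; the norm is "don't punish, except that you must punish
# rule-breakers (where failing to punish, or punishing an innocent,
# is itself rule-breaking)".

N = 100

# Change in an agent's payoff from each unilateral deviation, relative
# to complying, when everyone else follows the norm:
break_the_base_rule = -(N - 1)  # the other 99 agents each punish you
fail_to_punish = -(N - 2)       # punishing was free, so you save nothing,
                                # and the other 98 compliant agents punish you
punish_an_innocent = -(N - 1)   # gratuitous punishment is rule-breaking too

print(break_the_base_rule, fail_to_punish, punish_an_innocent)  # -99 -98 -99
```

Every deviation, at every meta-level, is strictly worse than complying, and the punishment targets a specific individual rather than relying on how one dial shifts the average.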
I thought the purpose of the example was to demonstrate that you can have a Nash equilibrium that is very close to the worst possible outcome. What did you think the purpose was, that would be better served by that stuff you listed?
It was partially to demonstrate that bad Nash equilibria affect even common-payoff games; there don’t even need to be dynamics in which some agents single out other agents to reward and punish.
What did you think the purpose was, that would be better served by that stuff you listed?
I think the purpose is the same as what you say it is: an example of an equilibrium that is “very close” to the worst possible outcome. But I would additionally prefer that the example not provoke the reaction that it critically relies on quirky mathematical details. (And I would be fine if this additional requirement came at the cost of the equilibrium being “90% of the way towards the worst possible outcome” rather than 99% of the way.)
The cost I’d be concerned about is making the example significantly more complicated.
I’m also not sure the unintuitiveness is actually bad in this case. I think there’s value in understanding examples where your intuitions don’t work, and I wouldn’t want someone to walk away with the mistaken impression that the folk theorems only predict intuitive things.