This seems to be phrased like a disagreement, but I think you’re mostly saying things that are addressed in the original post. It is totally fair to say that things wouldn’t go down like this if you stuck 100 actual prisoners or mathematicians or whatever into this scenario. I don’t believe OP was trying to claim that it would. The point is just that sometimes bad equilibria can form from everyone following simple, seemingly innocuous rules. It is a faithful execution of certain simple strategic approaches, but it is a bad strategy in situations like this because it fails to account for things like modeling the preferences/behavior of other agents.
To address your scenario:
“Alice breaks it unilaterally on round 1, then Bob notices that and joins in on round 2, neither of them end up punished and they get 98.6 from then on”
Yeah, sure, this could happen “in real life”, but the important part is that this solution assumes that Alice breaking the equilibrium on round 1 is evidence that she’ll break it on round 2. This is exactly why the character Rowan asks:
“If you’ve just seen someone else violate the equilibrium, though, shouldn’t you rationally expect that they might defect from the equilibrium in the future?”
and it yields the response:
“Well, yes. This is a limitation of Nash equilibrium as an analysis tool, if you weren’t already convinced it needed revisiting based on this terribly unnecessarily horrible outcome in this situation. …”
This is followed by discussion of how we might add mathematical elements to account for predicting the behavior of other agents.
Humans predict the behavior of other agents automatically and would not be likely to get stuck in this particular bad equilibrium. That said, I still think this is an interesting toy example because it’s kind of similar to some bad equilibria which humans DO get stuck in (see these comments for example). It would be interesting to learn more about the mathematics and try to pinpoint what makes these failure modes more/less likely to occur.
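To make the “bad equilibrium from simple rules” point concrete, here’s a minimal one-round sketch of a punish-deviators game. The payoff rules here are my own assumptions for illustration (the original post’s exact setup may differ): everyone truly prefers 98.6, the equilibrium play is 99, and anyone who names something else gets a fixed penalty from each conforming punisher.

```python
# Toy punish-deviators game. Payoff rules are assumed for illustration,
# not taken from the original post.
N = 100
IDEAL = 98.6         # the temperature everyone actually prefers
EQUILIBRIUM = 99.0   # the temperature the equilibrium strategy plays
PENALTY_PER_PUNISHER = 1.0  # assumed punishment magnitude

def payoff(choices, i):
    """Payoff to player i: comfort minus punishment from conformers."""
    base = -abs(choices[i] - IDEAL)
    if choices[i] != EQUILIBRIUM:
        # Every other player still playing 99 punishes the deviator.
        punishers = sum(1 for j, c in enumerate(choices)
                        if j != i and c == EQUILIBRIUM)
        base -= PENALTY_PER_PUNISHER * punishers
    return base

conform = [EQUILIBRIUM] * N                      # everyone plays 99
alice_deviates = [IDEAL] + [EQUILIBRIUM] * (N - 1)  # Alice alone plays 98.6
all_deviate = [IDEAL] * N                        # everyone plays 98.6

print(payoff(conform, 0))         # mildly bad for everyone (about -0.4)
print(payoff(alice_deviates, 0))  # far worse: 99 punishers outweigh the comfort gain
print(payoff(all_deviate, 0))     # ideal: nobody is left to punish
```

Under these assumed numbers, deviating alone is a loss (which is exactly what makes “play 99 and punish deviators” a Nash equilibrium), while joint deviation makes everyone better off; the hard part, as the dialogue discusses, is that coordinating the joint deviation requires modeling what other agents will do.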