Most problems which initially seem like Prisoner’s Dilemma are actually Stag Hunt, because there are potential enforcement mechanisms available.
I’m not sure I follow, can you elaborate?
Is the idea that everyone can attempt to enforce norms of “cooperate in the PD” (stag), or not enforce those norms (rabbit)? And if you have enough “stag” players to successfully “hunt a stag”, then defecting in the PD becomes costly and rare, so the original PD dynamics mostly drop out?
If so, I kind of feel like I’d still model the second-level game as a PD rather than a stag hunt? I’m not sure though, and before I chase that thread, I’ll let you clarify whether that’s actually what you meant.
By “is a PD”, I mean, there is a cooperative solution which is better than any Nash equilibrium. In some sense, the self-interest of the players is what prevents them from getting to the better solution.
By “is a SH”, I mean, there is at least one good cooperative solution which is an equilibrium, but there are also other equilibria which are significantly worse. Some of the worse outcomes can be forced by unilateral action, but the better outcomes require coordinated action (and attempted-but-failed coordination is even worse than the bad solutions).
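To make those two definitions concrete, here’s a minimal sketch in Python (the payoff numbers are mine, just standard textbook values): it enumerates the pure-strategy Nash equilibria of a symmetric 2×2 game. The PD’s only equilibrium is mutual defection even though mutual cooperation pays more, while the SH has both the good all-stag equilibrium and the bad all-rabbit one.

```python
from itertools import product

# Row player's payoffs for a symmetric 2x2 game; the column player's
# payoff at (a, b) is the row player's payoff at (b, a).
# Actions: 0 = cooperate/stag, 1 = defect/rabbit.
PD = [[3, 0],   # C vs C, C vs D
      [5, 1]]   # D vs C, D vs D
SH = [[4, 0],   # Stag vs Stag, Stag vs Rabbit
      [3, 3]]   # Rabbit vs Stag, Rabbit vs Rabbit

def pure_nash(p):
    """Pure-strategy Nash equilibria of a symmetric 2x2 game."""
    return [(a, b) for a, b in product(range(2), repeat=2)
            if all(p[a][b] >= p[a2][b] for a2 in range(2))    # row can't gain
            and all(p[b][a] >= p[b2][a] for b2 in range(2))]  # column can't gain

print(pure_nash(PD))  # [(1, 1)]: only mutual defection, though (C, C) pays 3 > 1
print(pure_nash(SH))  # [(0, 0), (1, 1)]: all-stag and all-rabbit are both equilibria
```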
In an iterated PD (given the right assumptions, e.g. an appropriately high probability of the game continuing after each round), tit-for-tat is an equilibrium strategy which results in a pure-cooperation outcome. The remaining difficulty is ending up in that equilibrium in the first place: there are many other equilibria one could equally well land in, including total mutual defection. In that sense, iteration can turn a PD into a SH.
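A quick numerical spot-check of that claim (a sketch; the payoffs and the 0.9 continuation probability are mine, with the continuation probability treated as a discount factor): neither player gains by unilaterally switching between tit-for-tat and always-defect, so both the pure-cooperation outcome and total mutual defection look stable.

```python
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tft(my_hist, their_hist):
    return their_hist[-1] if their_hist else 'C'

def alld(my_hist, their_hist):
    return 'D'

def discounted(s1, s2, delta=0.9, rounds=500):
    """Discounted payoffs; delta stands in for the continuation probability."""
    h1, h2, u1, u2, w = [], [], 0.0, 0.0, 1.0
    for _ in range(rounds):
        a1, a2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[a1, a2]
        u1, u2, w = u1 + w * p1, u2 + w * p2, w * delta
        h1.append(a1); h2.append(a2)
    return round(u1, 1), round(u2, 1)

print(discounted(tft, tft))    # (30.0, 30.0): pure cooperation
print(discounted(alld, tft))   # (14.0, 9.0): defecting against TFT loses vs 30
print(discounted(alld, alld))  # (10.0, 10.0): mutual defection, also stable
print(discounted(tft, alld))   # (9.0, 14.0): cooperating against ALLD loses vs 10
```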
Other modifications, such as commitment mechanisms or access to the other player’s source code, can have similar effects.
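For the source-code case, one classic construction (a “program equilibrium” sketch of my own, not anything from the post) is a program that cooperates exactly when the opponent’s source matches its own:

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    # Cooperate iff the opponent is running this exact program; otherwise
    # defect. Two clique_bots cooperate, and neither side can gain by
    # unilaterally submitting a different program, so mutual cooperation
    # becomes an equilibrium of the one-shot PD.
    return 'C' if opponent_source == inspect.getsource(clique_bot) else 'D'

print(clique_bot(inspect.getsource(clique_bot)))      # 'C'
print(clique_bot("def defect_bot(src): return 'D'"))  # 'D'
```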
Thanks, that makes sense.
Rambling:
In the specific case of iteration, I’m not sure that works so well for multiplayer games? It would depend on details, but e.g. if a player’s only options are “cooperate” or “defect against everyone equally”, then… mm, I guess “cooperate iff everyone else cooperated last round” is still stable, just a lot more fragile than with two players.
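To see the fragility concretely, here’s a toy simulation (parameters mine): with five players all playing “cooperate iff everyone else cooperated last round”, a single accidental defection cascades into permanent group-wide defection, whereas two-player tit-for-tat at worst degrades into an alternating echo.

```python
def simulate(n_players=5, rounds=8, slip_round=2):
    # Everyone plays: cooperate iff every *other* player cooperated last
    # round. Player 0 accidentally defects once, at slip_round.
    last, history = ['C'] * n_players, []
    for t in range(rounds):
        acts = ['C' if all(a == 'C' for j, a in enumerate(last) if j != i)
                else 'D' for i in range(n_players)]
        if t == slip_round:
            acts[0] = 'D'
        history.append(''.join(acts))
        last = acts
    return history

print(simulate())
# ['CCCCC', 'CCCCC', 'DCCCC', 'CDDDD', 'DDDDD', 'DDDDD', 'DDDDD', 'DDDDD']
# One slip and the whole group locks into mutual defection for good.
```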
But you did say it’s difficult, so I don’t think I’m disagreeing with you. The PD-ness of it still feels more salient to me than the SH-ness, but I’m not sure that particularly means anything.
Actually, I think the intuitive core of a PD, to me, is “players can capture value by destroying value on net”. And I hadn’t really thought about the core of SH prior to this post, but I think I was coming around to something like threshold effects: “players can try to capture value for themselves [it’s not really important whether that’s net positive or net negative], but past a certain fairly specific point, it’s strongly net negative”. Under these intuitions, there’s nothing stopping a game from being both PD and SH.
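As a possible example (a toy threshold public-goods game, all numbers made up): each contribution pays a spillover to every other player at a net profit, and there’s a group bonus that vanishes below a threshold of contributors. Defecting from full contribution captures 2 while destroying 2 on net (the PD core), and total welfare falls off a cliff between three and two contributors (the SH core), so the game is both at once.

```python
N, COST, SPILLOVER, BONUS, THRESHOLD = 5, 2, 1, 3, 3

def payoffs(profile):  # profile[i] is True iff player i contributes
    k = sum(profile)
    bonus = BONUS if k >= THRESHOLD else 0
    # Each contribution pays SPILLOVER to every other player at cost COST;
    # everyone gets BONUS iff at least THRESHOLD players contribute.
    return [SPILLOVER * (k - c) - COST * c + bonus for c in profile]

# SH-ness: total welfare by number of contributors has a sharp threshold.
for k in range(N + 1):
    profile = (True,) * k + (False,) * (N - k)
    print(k, sum(payoffs(profile)))    # 0, 2, 4, then 21, 23, 25

# PD-ness: from all-contribute, a lone defector gains 2; the group loses 2.
all_c = (True,) * N
one_d = (False,) + (True,) * (N - 1)
print(payoffs(all_c)[0], "->", payoffs(one_d)[0])      # 5 -> 7
print(sum(payoffs(all_c)), "->", sum(payoffs(one_d)))  # 25 -> 23
```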
Not sure I’m going anywhere with this, and it feels kind of close to just arguing over definitions.