This is actually much more like “guess 2⁄3 of the average” than the tragedy of the commons or the prisoner’s dilemma, in that there’s an obvious Nash equilibrium that there really shouldn’t be any reason to deviate from, except that you need to take into account the foolishness of the other players. The rules don’t offer any means for players to correlate their moves with their opponents’ moves, so the only reason to ever cooperate would be if you expect other players who don’t understand game theory to be more likely to cooperate with you if your reputation is greater than 0.

You might want to cooperate if doing so implied that they would be more likely to cooperate with you, but you can’t make that happen (or at least, any such effect would necessarily benefit the other players just as much as you, and would thus be worthless, since it’s a zero-sum game). And you don’t want to condition your cooperation on expecting theirs, because (C,C)+(D,D) gives you the same payoff as (C,D)+(D,C): you want as many cooperations as possible from your opponents, and as many defections as possible by you, but there’s no incentive to correlate them.

So if you need a positive reputation in order to attract cooperations from players who don’t know what they’re doing, you may as well just cooperate with players that you think are doing poorly (probably either players whose reputation is too high, who are thus sacrificing too much, or players whose reputation is too low, who are thus not getting cooperations from people who think they should cooperate with high-reputation players, or both). These players won’t be your main competition at the end, so helping them is harmless.
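The payoff identity above can be checked directly. The thread never states the actual payoff numbers, so the +3/−1 values below are placeholders; the only thing that matters is the assumed additive shape, where your payoff is a gain from their move minus a cost from yours:

```python
# Hedged sketch with placeholder numbers: any payoff of the form
# u(me, them) = gain(them) - cost(me) makes (C,C)+(D,D) worth exactly
# as much to you as (C,D)+(D,C).
GAIN_IF_THEY_COOPERATE = 3  # placeholder value
COST_IF_I_COOPERATE = 1     # placeholder value

def payoff(my_move, their_move):
    """My payoff for one pairing; 'C' = cooperate/hunt, 'D' = defect/slack."""
    gain = GAIN_IF_THEY_COOPERATE if their_move == "C" else 0
    cost = COST_IF_I_COOPERATE if my_move == "C" else 0
    return gain - cost

# Matched play vs crossed play give the same total, so there is no
# incentive to correlate my cooperations with my opponents'.
matched = payoff("C", "C") + payoff("D", "D")
crossed = payoff("C", "D") + payoff("D", "C")
assert matched == crossed
```

The identity holds for any gain and cost values, not just these, because each term in the sum depends on only one player’s move.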
Cooperation early in the game is better than cooperation late in the game because it affects your reputation for a larger portion of the game (plus, one cooperation has a larger effect on early-game reputation than it does on late-game reputation).
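The second effect can be made concrete under the assumption (not stated in the thread) that reputation is simply your cumulative fraction of cooperations: one extra cooperation moves that fraction by exactly 1/n after n interactions, a large jump early and a negligible one late.

```python
# Hedged sketch, assuming reputation = cumulative fraction of cooperations.
def reputation(hunts, interactions):
    return hunts / interactions if interactions else 0.0

# One extra hunt, 5 games into the tournament vs 100 games in:
early_bump = reputation(3, 5) - reputation(2, 5)
late_bump = reputation(51, 100) - reputation(50, 100)
assert abs(early_bump - 1 / 5) < 1e-12    # shifts reputation by 0.2
assert abs(late_bump - 1 / 100) < 1e-12   # shifts reputation by 0.01
```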
There are ways of determining which player is which, vaguely. You can keep track of reputation between rounds, and infer that player 5 from last round couldn’t possibly be player 23 from this round, because that would require player 5 to have cooperated with more people than there are players. Alternatively, my bot could ensure that the number of times it’s hunted in its history is always a multiple of 17, and other smart bots could look for this.
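The multiple-of-17 trick might look something like the sketch below. The modulus and the helper names are made up for illustration; the point is just that the invariant is cheap to maintain and cheap to test, at the cost of occasional false positives:

```python
# Hedged sketch of the signalling idea: a bot keeps its lifetime hunt count
# a multiple of 17, and allied bots test for that residue when reading the
# public histories. The constant 17 is arbitrary; any uncommon residue works.
SIGNAL_MODULUS = 17

def hunts_needed_to_restore(my_hunt_count):
    """Extra hunts needed to bring the count back to a multiple of 17."""
    return (-my_hunt_count) % SIGNAL_MODULUS

def looks_like_ally(their_hunt_count):
    """Heuristic test other smart bots could run. A random count passes
    about 1 time in 17, so this is only a weak, noisy signal."""
    return their_hunt_count % SIGNAL_MODULUS == 0

assert looks_like_ally(34)
assert not looks_like_ally(35)
assert hunts_needed_to_restore(30) == 4  # 30 + 4 = 34, a multiple of 17
```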
the only reason to ever cooperate would be if you expect other players who don’t understand game theory to be more likely to cooperate with you if your reputation is greater than 0.
If there are only two kinds of players, those who slack all the time, and those who cooperate on the first round and then only with anybody with a positive reputation, then the second group will blow the first out of the water. Saying that the winners “don’t understand game theory” sounds a bit silly.
If there is a third kind of player, which cooperates on the first round and then slacks thereafter, then the third group will blow the second out of the water. The second group only wins because no one bothered exploiting them in your example, even though anyone easily could have.
Sure, but then you can add a fourth kind of player, who hunts with those whose reputation is equal to or higher than their own; it probably beats all three others (though the outcome might depend on the initial mix: if there are more of type 2 than type 4, type 3 might exploit enough of type 2 to beat type 4).
And then other strategies can beat that. There are plenty of “nice” strategies that are less foolish than “always slack”.
Good call. I was pretty sure that there weren’t any Nash equilibria other than constant slacking, but everyone using group 4’s strategy is also a Nash equilibrium, as is everyone hunting only with those whose reputation is exactly equal to their own. This makes group 4 considerably harder to exploit, although exploiting it is possible in most likely distributions of players if you know the distribution well enough. As you say, group 4 is less foolish than the slackers if there are enough of them. I still think that in practice, strategies that could be part of a Nash equilibrium won’t win, because their success relies on there being many identical copies of them.
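The four-group argument can be sketched as a toy simulation. The thread never specifies the payoffs, so the numbers below are assumptions: receiving a hunt is worth +3, performing one costs 1, reputation is the fraction of hunts so far, and every pair plays every round. With an equal mix, group 4 does come out on top under these assumptions, though a different mix or payoff matrix could change that:

```python
import itertools

class Player:
    def __init__(self, strategy):
        self.strategy = strategy
        self.hunts = self.games = 0
        self.score = 0.0
    def rep(self):
        # Assumed reputation measure: fraction of pairings in which I hunted.
        return self.hunts / self.games if self.games else 0.0

def always_slack(me, them, rnd):            # group 1
    return "S"

def hunt_with_positive(me, them, rnd):      # group 2
    return "H" if rnd == 0 or them.rep() > 0 else "S"

def first_round_only(me, them, rnd):        # group 3
    return "H" if rnd == 0 else "S"

def hunt_equal_or_higher(me, them, rnd):    # group 4
    return "H" if them.rep() >= me.rep() else "S"

def play(players, rounds=20, gain=3.0, cost=1.0):
    for rnd in range(rounds):
        # Decide every move from pre-round reputations, then apply together.
        moves = [(a, b, a.strategy(a, b, rnd), b.strategy(b, a, rnd))
                 for a, b in itertools.combinations(players, 2)]
        for a, b, ma, mb in moves:
            for p, mine, theirs in ((a, ma, mb), (b, mb, ma)):
                p.games += 1
                p.hunts += mine == "H"
                p.score += (gain if theirs == "H" else 0.0) \
                         - (cost if mine == "H" else 0.0)

strategies = [always_slack, hunt_with_positive,
              first_round_only, hunt_equal_or_higher]
players = [Player(s) for s in strategies for _ in range(5)]  # 5 of each
play(players)
avg = {s.__name__: sum(p.score for p in players if p.strategy is s) / 5
       for s in strategies}
```

With this particular mix, group 3 does exploit group 2 (as claimed above), the pure slackers finish last after their one free round of incoming hunts, and group 4 wins by cutting group 3 off once its reputation falls.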
If there are two kinds of players, those who throw rock, and those who throw paper, the latter will blow the former out of the water.
You are engaging in two fallacies: you are cherry-picking conditions to favor your particular strategy, and you are evaluating the strategies at the wrong level. Strategies should be evaluated with respect to how they affect the success of the individual person employing them, not how they affect the success of people in general who employ them. This fallacy is behind much of the cooperate/one-box arguments. Sure, if everyone in Group B cooperates with other members of Group B, then Group B will do better, and on a superficial level it seems like this means “if you’re in Group B, you should cooperate with other members of Group B”, but that’s fallacious reasoning. It’s the sort of thing that lies behind identity politics: “If Americans buy American, then Americans will do better, and you’re an American, so you will benefit from buying American.” Even if we grant that buying American gives a net benefit to America (which is a rather flimsy premise to begin with), it doesn’t follow that any individual American has a rational reason to buy American.

In your scenario, the presence of people with the “cooperate with people who have a reputation greater than 0” strategy provides a reason to cooperate in the first round, but there is no reason whatsoever to condition your own cooperation on someone having a reputation greater than 0. Anyone who, in this scenario, thinks that one should cooperate with people with reputation greater than 0 does indeed not understand game theory.
You are engaging in two fallacies: you are cherry-picking conditions to favor your particular strategy, and you are evaluating the strategies at the wrong level.
No, I’m simplifying for argument’s sake, using the example given by Alex (cooperating with any positive reputation). I discuss more complex strategies elsewhere in the thread; of course “cooperate only with people with > 0 reputation” is a pretty stupid and exploitable strategy. My point is that even such a stupid strategy could beat Alex’s “always defect”.