Players seemed to want to play “Friend” if and only if they expected their opponents to do so. This is not rational, but it accords with the “Tit-for-Tat” strategy hypothesized to be the evolutionary solution to Prisoner’s Dilemma.
Same comment as on your previous article in the series. Tit-for-Tat co-operates with a player who co-operated last time, not with one it anticipates will co-operate this time.
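To make the distinction concrete, here is a minimal sketch of Tit-for-Tat (move labels "C"/"D" and the function name are just illustrative choices): the decision depends only on the opponent's observed last move, never on a prediction of their next one.

```python
def tit_for_tat(opponent_history):
    """Co-operate on the first round, then copy the opponent's last move."""
    if not opponent_history:
        return "C"  # open with co-operation
    return opponent_history[-1]  # echo whatever they did last time

# It co-operates with someone who co-operated last round, regardless of
# any forecast about what they will do this round:
assert tit_for_tat([]) == "C"
assert tit_for_tat(["C"]) == "C"
assert tit_for_tat(["C", "D"]) == "D"
```

There is no "predicted move" input anywhere in the strategy, which is the whole point: a predictive, this-round-conditional rule is a different strategy from Tit-for-Tat.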
It is reputational systems which reward correct prediction (co-operate if and only if you predict that the other player will co-operate this time). That is because the reputational damage from defecting against a co-operator is large: the co-operator gains sympathy, and the defector risks punishment or reduced co-operation from other observers. By contrast, if a person who is generally known to co-operate defects against another defector, there is generally no reputational hit (indeed there is probably a slight uplift to reputation for predicting correctly and not letting the defector get away with it).
Super-rational players co-operate if and only if the other player is super-rational. If this were the strategy that humans in fact followed (i.e. there were ways in which super-rational players could reliably recognize each other), then co-operation would be pretty near universal among humans in PDs. But it isn't.
The empirical evidence (from this show, and from other studies) is that humans play a reputational strategy rather than a pure Tit-for-Tat or super-rational strategy. That appears to be what humans actually do, and there is a fairly convincing case that it is what we're adapted to do.
EDIT: The other evidence you quote in your article is very interesting though:
The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated… only 16% cooperate. When the same researchers in the same lab didn't tell the second player anything, 37% cooperated.
That suggests a mixture of reputational and super-rational strategies with a bit of “pure co-operate” thrown in as well. If everyone played a pure super-rational strategy, then no-one would co-operate after hearing for sure that the other player had already co-operated. (The exception is if both players knew for sure, going into the game, that the other was super-rational; then they could both commit to co-operate regardless; that case is equivalent to counterfactual mugging, or to Newcomb's problem with transparent boxes.) Whereas if everyone played a pure reputational strategy, then knowing that the other player had co-operated would increase the probability of co-operating, not reduce it. Interesting.
I’m wondering if there are any game-theory models which predict a mixed equilibrium between super-rational and reputation, and whether the equilibrium allows a small % of “pure co-operators” into the mix as well?
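As a back-of-envelope check on the mixture idea, here is the arithmetic under the toy taxonomy above. The strategy labels and the attribution of each experimental condition to a strategy type are assumptions for illustration, not a fitted model:

```python
# Quoted lab numbers (co-operation rates in each information condition):
told_defected = 0.03    # after hearing "the other player defected"
told_cooperated = 0.16  # after hearing "the other player co-operated"
no_information = 0.37   # told nothing

# Assumption: only unconditional co-operators still co-operate after a
# known defection.
pure_cooperators = told_defected

# Assumption: reputational players co-operate after a known co-operation,
# while pure super-rationalists (per the argument above) do not.
reputational = told_cooperated - pure_cooperators

# The remaining co-operation in the no-information condition would then
# have to come from super-rational / prediction-based play.
prediction_based = no_information - told_cooperated

print(f"pure co-operators ≈ {pure_cooperators:.0%}")   # ≈ 3%
print(f"reputational ≈ {reputational:.0%}")            # ≈ 13%
print(f"prediction-based ≈ {prediction_based:.0%}")    # ≈ 21%
```

This is only a decomposition of three numbers, so it can't distinguish competing models; it just shows the quoted rates are at least consistent with a three-way mix.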
Pure co-operate can be a reasonable strategy, even with foreknowledge of the opponent's defection in this round, if you think your opponent is playing something close to Tit-for-Tat and expect to play many more rounds with them.
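A toy payoff comparison makes this point. Assuming the standard PD payoffs (T=5, R=3, P=1, S=0) and a Tit-for-Tat partner who is known to be defecting this round, compare forgiving once against defecting every round:

```python
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff
rounds = 10

# Forgive: take the sucker's payoff once, after which Tit-for-Tat mirrors
# our co-operation and we settle into mutual co-operation.
forgive = S + (rounds - 1) * R

# Retaliate with defection every round: Tit-for-Tat mirrors that too,
# locking both players into mutual defection.
retaliate = P * rounds

print(forgive, retaliate)  # 27 10
```

With enough rounds remaining, eating one sucker's payoff to restore mutual co-operation dominates locking in mutual defection, which is why "co-operate into a known defection" can be sensible here.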
Agree again. Yvain is misusing terms and misrepresenting evolutionary strategies. This sequence is vastly overrated.