I think group selection may have something to recommend it here.
Let’s say that your odds of reproduction go up 2% if you Defect and the other person’s go down by 1%, and that the other person’s odds go up 1% if you Cooperate. This creates a PD-like payoff structure: if you both Cooperate, you both get 1% added to your chance of passing on your genes. If you Cooperate and they Defect, they get a 3% bonus and you take a 1% hit, and this reverses if you Defect and they Cooperate. If you both Defect, both of your odds go up 1%. (Strictly speaking this makes it a weak PD, since mutual cooperation and mutual defection tie at +1%, but Defecting still dominates.)
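To make the arithmetic explicit, here’s a minimal sketch of those payoffs. The 2%/1% numbers are the ones above; everything else is just bookkeeping:

```python
# A minimal sketch of the payoffs described above, assuming the stated
# numbers: your Defect gives you +2% and the other player -1%; your
# Cooperate gives the other player +1% (and you nothing directly).
DEFECT_SELF, DEFECT_OTHER = 0.02, -0.01  # effects of one player Defecting
COOP_OTHER = 0.01                        # effect of one player Cooperating

def payoff(my_move: str, their_move: str) -> float:
    """Change in my reproductive odds, given both moves ('C' or 'D')."""
    total = 0.0
    if my_move == "D":
        total += DEFECT_SELF    # I take the defection bonus
    if their_move == "D":
        total += DEFECT_OTHER   # their defection costs me
    else:
        total += COOP_OTHER     # their cooperation helps me
    return total

for me in "CD":
    for them in "CD":
        print(f"me={me}, them={them}: {payoff(me, them):+.2%}")
# me=C, them=C: +1.00%    me=C, them=D: -1.00%
# me=D, them=C: +3.00%    me=D, them=D: +1.00%
```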
DefectBots quickly take over CooperateBots in this case, but don’t beat out TFT. But that still doesn’t explain why TFT wouldn’t just Defect against TFT.
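Here’s a toy replicator simulation of that claim. One assumption to flag: I use the classic Axelrod payoffs (T=5, R=3, P=1, S=0) rather than the exact percentages above, because with those percentages mutual cooperation and mutual defection tie and the dynamics degenerate.

```python
# Toy replicator dynamics: DefectBot eats CooperateBot, then starves against TFT.
# Assumes classic Axelrod payoffs (T=5, R=3, P=1, S=0), not the percentages above.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def defectbot(opp_history):    return "D"
def cooperatebot(opp_history): return "C"
def tft(opp_history):          return opp_history[-1] if opp_history else "C"

def play(strat_a, strat_b, rounds=50):
    """Iterated game; returns strat_a's total score against strat_b."""
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)][0]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a

strategies = {"DefectBot": defectbot, "CooperateBot": cooperatebot, "TFT": tft}
shares = {name: 1 / 3 for name in strategies}  # equal initial population shares

for generation in range(40):
    # fitness of each type = share-weighted average score against the population
    fitness = {a: sum(play(strategies[a], strategies[b]) * shares[b]
                      for b in strategies) for a in strategies}
    mean_fitness = sum(fitness[n] * shares[n] for n in shares)
    shares = {n: shares[n] * fitness[n] / mean_fitness for n in shares}

print({name: round(share, 3) for name, share in shares.items()})
# CooperateBot collapses within a few generations; after that TFT outscores
# DefectBot (mutual cooperation beats mutual defection) and takes over.
```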
I can think of a few reasons why it might not, in the case of humans.
For one, whatever algorithm is running might not model the scenario as finite. That isn’t accurate, but it might be beneficial anyway (space isn’t flat, but we model it as flat because flat was efficient enough for hunting gazelle), so the inaccuracy might have helped groups implementing a TFT-like algorithm: with no known last round, the backward-induction argument for Defecting never gets started.
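To see why an indefinite horizon helps, here’s a quick check, again assuming the Axelrod payoffs and a fixed probability w that the interaction continues after each round:

```python
# Against a TFT opponent, compare always-Cooperating with always-Defecting when
# the game continues with probability w each round (geometric horizon).
# Assumes the classic payoffs T=5, R=3, P=1 again.
T, R, P = 5, 3, 1

def value_of_cooperating(w):
    return R / (1 - w)            # R every round, forever in expectation

def value_of_defecting(w):
    return T + w * P / (1 - w)    # T once, then TFT retaliates: P thereafter

for w in (0.1, 0.5, 0.9):
    print(f"w={w}: cooperating pays more: "
          f"{value_of_cooperating(w) > value_of_defecting(w)}")
# w=0.1: False; w=0.5: False (an exact tie); w=0.9: True.
# The crossover is at w = (T - R) / (T - P) = 0.5: once the future looms large
# enough, Defecting against TFT stops paying, with no backward induction to run.
```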
For another, as GeraldMonroe points out, most scenarios aren’t PD scenarios. And it’s unlikely that we have one part of our brain just for PDs and another for all other interactions; that would be more expensive, as far as brains go. So we probably use the same sort of reasoning everywhere, even when it isn’t appropriate.
Third, sometimes we do act in ways that TFT would predict. Waitresses getting tipped less when they work on highways isn’t consistent with TFT (by TFT they shouldn’t get tipped at all), but it also isn’t consistent with people cooperating at a flat rate (then we wouldn’t see a difference). If I had to guess, I’d say we have some algorithm that estimates how likely the scenario is to affect us again, weighing the counterfactual (we would want to get tipped if we were the waitress) against TFT (we don’t expect to benefit from tipping).
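If I were to sketch that guess as code, it might look something like this; the function, weights, and numbers are all invented for illustration, not a claim about actual cognition:

```python
# A made-up toy model of the guessed algorithm: blend a TFT-ish term (how
# likely is this interaction to repeat?) with a counterfactual term (would I
# want to be treated this way?). Every name and weight here is hypothetical.
def tip_propensity(p_repeat: float, counterfactual_pull: float = 0.5) -> float:
    """Return a 0-1 propensity to cooperate (tip)."""
    return p_repeat + (1 - p_repeat) * counterfactual_pull

print(tip_propensity(p_repeat=0.80))  # regular at a local diner -> 0.90
print(tip_propensity(p_repeat=0.05))  # highway rest stop        -> 0.525
# Pure TFT (counterfactual_pull=0) would predict almost no tipping on the
# highway; the counterfactual term keeps tipping positive but reduced, which
# matches the observed pattern.
```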