Emotional pay-offs aren’t the ones which evolution cares about. Emotional commitments are an evolved mechanism which makes us cooperate more, but they wouldn’t have evolved if more cooperation weren’t advantageous in the first place.
There are two senses in which commitments could be said to change the payoffs.
Suppose Anne has an emotional commitment mechanism (the full range of gratitude, loyalty, anger, vengeance and so on). Then the subjective utility cost to Anne of defecting against Bob (who is co-operating) is high: it really feels bad. This is the sort of payoff that humans care about, but evolution does not.
But the fact that Anne has this commitment mechanism also changes the objective payoffs to Bob: the likelihood that he survives if he co-operates or defects, the expected number of his surviving offspring and other relatives; in short, the Darwinian utility function for Bob (inclusive fitness). This is the part that matters for evolution (at least biological evolution), and it is the sense in which the game has shifted so that it is no longer a true Prisoner’s Dilemma for Bob.
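To make that concrete, here is a minimal sketch (in Python, with the usual illustrative payoff numbers T=5, R=3, P=1, S=0 and ten rounds; the numbers are my own choice, not anything from the discussion itself): against an Anne who is credibly committed to TFT, Bob’s cumulative payoff from defecting collapses, so defection is no longer the dominant move it would be in a one-shot game.

```python
# Illustrative Prisoner's Dilemma payoffs (T > R > P > S); these particular
# numbers are an assumption for the sake of the example.
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff
ROUNDS = 10

# Bob's cumulative payoffs against an Anne credibly committed to tit-for-tat.

# Bob co-operates every round: mutual cooperation throughout.
always_cooperate = R * ROUNDS

# Bob defects every round: one temptation payoff, then mutual punishment,
# because the committed Anne retaliates from the second round onward.
always_defect = T + P * (ROUNDS - 1)

# Bob co-operates until the final round and defects only there, too late
# for Anne's retaliation to matter.
defect_last_only = R * (ROUNDS - 1) + T

print(always_cooperate)   # 30
print(always_defect)      # 14
print(defect_last_only)   # 32 -- the best Bob can do against committed TFT
```

The only edge defection retains in this sketch is the one-off temptation payoff on the very last turn, which is exactly the residue picked up next.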
Yes, but does mentioning emotional commitment in the latter sense really help to answer the question of why (apparently) non-Nash strategies have evolved? There is no practical difference for Bob whether Anne plays TFT because of her emotional commitment or because of a pure game-theoretical calculation. On the last turn Bob should defect either way. Or put another way: how did emotional commitment first arise?
Emotional commitment arises because the 100% foolproof (but, unfortunately, difficult) way to win an iterated Prisoner’s Dilemma is to credibly pre-commit to a strategy. Anne’s (why not Alice’s?) emotional reasons for playing TFT are in effect a way of pre-committing to TFT; the most Bob can then do is defect on the last turn.
If, on the other hand, Anne simply plays TFT because she thinks it’s the smart thing to do, then the defect-on-the-last-X-turns reasoning can escalate, by backward induction, and result in everyone defecting from the first turn. For that matter, Bob could try something like “If you cooperate when I defect, I’ll sometimes cooperate… maybe” and test Anne’s stubbornness.
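For what it’s worth, here is a rough sketch of that unravelling, reusing the same assumed payoffs as above (T=5, R=3, P=1, S=0, ten rounds; again my own illustration, not the original exchange): if Anne plays TFT but is expected to defect unconditionally from some turn onward, Bob’s best response is to start defecting one turn earlier, and if Anne reasons symmetrically the threshold ratchets back until nobody co-operates at all.

```python
# Assumed illustrative payoffs, as in the earlier sketch.
T, R, P, S = 5, 3, 1, 0
ROUNDS = 10

def bob_payoff(bob_start, anne_start):
    """Bob's total payoff when Bob defects from turn bob_start onward and Anne
    plays tit-for-tat but defects unconditionally from turn anne_start onward."""
    total, anne_mirror = 0, 'C'          # TFT opens with co-operation
    for t in range(1, ROUNDS + 1):
        bob = 'D' if t >= bob_start else 'C'
        anne = 'D' if t >= anne_start else anne_mirror
        total += {('C', 'C'): R, ('D', 'C'): T,
                  ('C', 'D'): S, ('D', 'D'): P}[(bob, anne)]
        anne_mirror = bob                # TFT copies Bob's last move
    return total

# Iterated best responses: Bob defects one turn earlier than Anne is expected
# to, Anne (reasoning symmetrically) matches him, and so on.
anne_start = ROUNDS + 1                  # Anne starts as pure TFT
while True:
    bob_start = max(range(1, ROUNDS + 2),
                    key=lambda s: bob_payoff(s, anne_start))
    if bob_start >= anne_start:
        break
    anne_start = bob_start
print(anne_start)  # 1 -- co-operation has unravelled back to the first turn
```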