I’m not sure I see your point. My reasoning was that if you meet the same person on average every thousand games in an infinite series of games, you’ll end up meeting them an infinite number of times. Am I confusing the sample space with the event space?
If you have a strong discount factor, then even if you meet the same person infinitely often, your gain is still bounded above (summing a geometric series), and can be much smaller than winning your current round.
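A quick sketch of that bound (the function name is mine, not from the thread): a constant reward R discounted by p per round sums to the geometric series R + Rp + Rp² + … = R/(1-p), so the total stays finite even over infinitely many meetings.

```python
def discounted_total(R, p, rounds):
    """Partial sum of the discounted reward stream R + R*p + R*p**2 + ..."""
    return sum(R * p**r for r in range(rounds))

# With R = 3 and p = 0.5, the partial sum converges to R/(1-p) = 6:
R, p = 3.0, 0.5
print(discounted_total(R, p, 60))  # ≈ 6.0
print(R / (1 - p))                 # 6.0, the closed-form bound
```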
*face-palm* Ah yes. Thanks.
How can R/(1-p) diminish when R and p are constant? Are you discounting future games as worth less than this game, and is that consistent with the scoring of iterated prisoner’s dilemma?
Yes, that’s what discounting does. If you have a discounted iterated PD, you have to do something like that. And if R/(1-p) is smaller than the payoff from profiteering in your current interaction, you’ll profiteer in your current interaction.
Is that consistent with the scoring of iterated prisoner’s dilemma, or is it a different game? The goal of abstract games is to maximize one’s score at the end of the game (or, in infinite games, to maximize the average score per round over infinite time).
The expected score of a discounting defector with per-round discount factor p against a cooperate-then-reciprocate player in the [3,4;1,2] matrix, after the first n+1 rounds, would be 4 + sum_{r=1}^{n} 2p^r. The expected score of a cooperate-then-reciprocate player against the same opponent would be 3 + sum_{r=1}^{n} 3p^r.
A quick estimate says that for a p of .5, the two scores are the same over infinite time.
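The estimate checks out numerically. A small sketch (function names are mine): in the [3,4;1,2] matrix with discount p, the defector facing a cooperate-then-reciprocate player scores 4 + Σ 2pʳ, mutual reciprocators score 3 + Σ 3pʳ, and at p = 0.5 both limits come out to 6.

```python
def defector_score(p, n):
    """Defect every round vs. cooperate-then-reciprocate: 4 once, then 2 per round, discounted."""
    return 4 + sum(2 * p**r for r in range(1, n + 1))

def reciprocator_score(p, n):
    """Cooperate-then-reciprocate vs. itself: 3 per round, discounted."""
    return 3 + sum(3 * p**r for r in range(1, n + 1))

p, n = 0.5, 60  # n large enough that the partial sums have converged
print(defector_score(p, n), reciprocator_score(p, n))  # both ≈ 6.0
```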
It is, for the reasons you suggest, a different game.