Each player has additional information about how the other player has played in the past. I wasn’t trying to say that iterated PD for N=100 rounds becomes N=0, I was saying it becomes N=99, followed by one straight game.
Also, new information about how a player has behaved in radically different circumstances is not the same as being able to rationally update on what they will do in the future. You have never before encountered the agent in front of you in circumstances where they are interacting with you for the last time, ever.
And that is inaccurate, because your decision in round 99 may affect Clippy’s decision in round 100. There’s no rule anywhere that says Clippy isn’t, for example, assuming that your decision-making processes are similar, and that if it decided to cooperate in the last round after 99 identical turns, there’d be a good chance that you’d cooperate as well because of that. Sure, that’s not a very likely scenario, and obviously you should always defect in the last round, but this shows why N=100 never simply becomes N=99.
(I’m not very familiar with EDT, but it seems like a decision theory that would be prone to not defecting in the last round after 99 identical rounds.)
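To make that concrete, here is a minimal sketch of the last two rounds, under assumptions that are entirely mine: the usual illustrative payoffs T=5, R=3, P=1, S=0, Clippy cooperating in round 99, me defecting in round 100 no matter what, and Clippy cooperating in round 100 with probability p only if all 99 earlier rounds were mutual cooperation.

```python
# Endgame sketch: does my round-99 move matter if Clippy's round-100 move
# can depend on the history? T/R/P/S are the usual temptation/reward/
# punishment/sucker payoffs; the numbers are illustrative, not from the thread.
T, R, P, S = 5, 3, 1, 0

def my_payoff_rounds_99_and_100(cooperate_in_99, p_clippy_rewards_history):
    """My expected payoff over rounds 99 and 100.

    Assumed, purely for illustration: Clippy cooperates in round 99; I defect
    in round 100 either way; Clippy cooperates in round 100 with probability p
    only if all 99 earlier rounds were mutual cooperation, else it defects.
    """
    p = p_clippy_rewards_history
    if cooperate_in_99:
        round_99 = R                     # both cooperate
        round_100 = p * T + (1 - p) * P  # I defect; Clippy maybe still cooperates
    else:
        round_99 = T                     # I defect against a cooperator
        round_100 = P                    # history broken, so mutual defection
    return round_99 + round_100

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p={p:.2f}: cooperate in 99 -> {my_payoff_rounds_99_and_100(True, p)}, "
          f"defect in 99 -> {my_payoff_rounds_99_and_100(False, p)}")
# Cooperating in round 99 wins whenever p > 0.5 with these payoffs.
```

With these numbers, the round-99 choice only separates cleanly from round 100 if you are sure p is effectively zero.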
And my decision in round 99 is also part of the Iterated Prisoner’s Dilemma. My decision in round 100 is no longer iterated PD, but normal PD with additional information about how my partner played an IPD.
Key feature of PD as opposed to IPD: in Prisoner’s Dilemma, you will never, ever interact with the other player again. If that’s a possibility, then you are playing a game with many similarities but a different premise.
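To spell out what “normal PD with additional information” buys you: the information only changes my estimate of how likely my partner is to cooperate, and in a one-shot PD defection has the higher expected payoff for every such estimate, as long as their move does not depend on mine. A rough sketch, with payoff numbers (T=5, R=3, P=1, S=0) that are purely illustrative:

```python
# One-shot PD: whatever probability q I assign to my partner cooperating,
# defecting has the strictly higher expected payoff, because T > R and P > S.
# Extra information about their past play only moves q around; it never
# flips the comparison, as long as their move is independent of mine.
T, R, P, S = 5, 3, 1, 0

def expected_payoff(my_move, q_other_cooperates):
    q = q_other_cooperates
    if my_move == "cooperate":
        return q * R + (1 - q) * S
    return q * T + (1 - q) * P

for q in (0.0, 0.5, 1.0):
    print(f"q={q:.1f}: cooperate -> {expected_payoff('cooperate', q):.1f}, "
          f"defect -> {expected_payoff('defect', q):.1f}")
```

The disagreement above is exactly about whether that independence assumption is safe.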
As you said, a key feature of PD is that you’re not ever going to interact with the other player again, so while the last round may perhaps be interpreted as PD, the second-to-last round may not.
Of course you could just as well argue that a key feature of PD is also that you have never interacted with the other player before. That’s my point of view, but in the end this is an academic question.
It doesn’t matter: 100-round IPD only becomes 99-round IPD if you have 100% confidence that Clippy’s decision in round 100 is not in any way causally related to your actual decisions in rounds 1..99.
If I pick 100 people randomly off the street and let them play ordinary PD, how many do you think will cooperate, even though it may not make sense to you or me? And here you’re playing with a paperclip maximizer you know nothing about.
I really don’t think you should have that kind of confidence.
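As a toy example of what that lack of confidence can cost, here is a 100-round match against one hypothetical Clippy: a grim trigger that keeps cooperating, even in the final round, only while the entire history is mutual cooperation. The strategy and the payoff numbers are assumptions of mine, not anything we know about Clippy.

```python
# 100-round IPD against a Clippy that cooperates (including in round 100)
# only while every past round was mutual cooperation. Payoffs T=5, R=3,
# P=1, S=0 are illustrative.
T, R, P, S = 5, 3, 1, 0
N = 100

def payoff(me, other):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]

def clippy_move(history):
    # history is a list of (my_move, clippy_move) pairs
    return "C" if all(m == "C" and c == "C" for m, c in history) else "D"

def play(defect_from_round):
    """My total payoff if I cooperate until defect_from_round, then defect."""
    history, total = [], 0
    for r in range(1, N + 1):
        mine = "D" if r >= defect_from_round else "C"
        theirs = clippy_move(history)
        total += payoff(mine, theirs)
        history.append((mine, theirs))
    return total

print("defect only in round 100:", play(100))  # 99*R + T = 302
print("defect in rounds 99-100: ", play(99))   # 98*R + T + P = 300
```

A different Clippy can make the comparison go the other way; the point is only that peeling off the last round is not free unless you are certain its final move ignores everything you did before.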
I don’t think having no information about the other player is part of PD. If you do, then it’s not academic at all; it’s a key definitional distinction that matters!
After those 99 rounds have been played, is the game PD or isn’t it?
Oh, and if you pick me to participate in the closest approximation of PD that you can provide, I will cooperate, take my reward (if any), and then explain that the differences between the approximation and actual PD were key to my decision, because I prefer to live in a world where cooperation happens in pseudo-PD situations.
No, it isn’t. But someone who defects in ordinary PD might defect the last round in IPD for the same reasons. I certainly would.
You are looking for excuses instead of considering the least convenient possible world. Would you cooperate in this problem?
I’m not in an abstract game. I only play games with some concrete aspect. If you asked me, on the street, to play a game with a stranger and I recognized the PD setup, I would participate, but I would not be playing Prisoner’s Dilemma; I would be playing a meta-version which also has meta-payoffs.