Sure, you can think about this stuff in a CDT framework (especially over iterated games), though it is really quite hard. Remember, the default outcome in an n-round prisoner’s dilemma under CDT is still constant defection: defection dominates in the last round, so by backward induction it dominates in the second-to-last round too, and cooperation unravels all the way to the first round. So it being single-shot isn’t necessary.
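To make the unraveling concrete, here is a minimal sketch of the backward-induction argument; the T > R > P > S payoff values are the usual illustrative ones, not anything from this discussion:

```python
# Minimal sketch of backward induction in a known n-round prisoner's dilemma.
# Payoff values (T > R > P > S) are illustrative assumptions.

T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff
PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def one_shot_best_response(opp: str) -> str:
    """With no future to influence, pick the action maximizing this round's payoff."""
    return max(("C", "D"), key=lambda a: PAYOFF[(a, opp)])

# Last round: effectively one-shot, and defection dominates either way.
assert one_shot_best_response("C") == "D"
assert one_shot_best_response("D") == "D"

# Inductive step: once play from round k+1 onward is pinned to mutual
# defection, nothing you do in round k affects the future, so round k is
# one-shot again and defection dominates there too. Unrolling from round n
# back to round 1 gives constant defection as the subgame-perfect outcome.
```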
Of course, the whole problem with TDT-ish arguments is that we have very little principled foundation for how to reason when two actors are quite imperfect decision-theoretic copies of each other (as the U.S. and China almost definitely are). This makes technical analysis quite difficult in exactly the domains where the effects of this kind of reasoning are large.
I think the inductive argument just isn’t that strong when dealing with real agents. If, for whatever reason, you believe your counterpart will respond in a tit-for-tat manner even in a finite-round PD, even though that’s not a Nash equilibrium strategy, your best response is not necessarily to defect. So CDT in a vacuum doesn’t prescribe always-defect; you need assumptions about the players’ beliefs, and I think the assumption of Nash equilibrium, or of common knowledge of backward induction plus iterated deletion of dominated strategies, is questionable.
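As a hedged illustration of this point, here is a small dynamic-programming sketch of the best response to a believed tit-for-tat opponent; the payoffs and the 10-round horizon are assumptions for the example:

```python
# Best response to an opponent you *believe* plays tit-for-tat in a
# known finite-round PD. Payoffs and round count are illustrative.
from functools import lru_cache

T, R, P, S = 5, 3, 1, 0
PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
N = 10  # number of rounds

@lru_cache(maxsize=None)
def value(k: int, opp_move: str) -> int:
    """Max total payoff from round k on, given the TFT opponent's move this round."""
    if k > N:
        return 0
    # Tit-for-tat: the opponent's next move copies my move this round.
    return max(PAYOFF[(a, opp_move)] + value(k + 1, a) for a in ("C", "D"))

def best_move(k: int, opp_move: str) -> str:
    return max(("C", "D"), key=lambda a: PAYOFF[(a, opp_move)] + value(k + 1, a))

opp = "C"  # tit-for-tat opens with cooperation
for k in range(1, N + 1):
    a = best_move(k, opp)
    print(f"round {k}: play {a}")  # C in rounds 1-9, D only in round 10
    opp = a
```

For these payoffs the optimal plan against tit-for-tat is to cooperate in every round except the last, which already contradicts the always-defect conclusion.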
Also, of course, CDT agents can use conditional commitment + coordination devices.
Agreed!
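For concreteness on the conditional-commitment point, here is a toy sketch in the spirit of program equilibrium (Tennenholtz 2004); `clique_bot` is a hypothetical illustration, not anyone’s proposed mechanism:

```python
# Toy "program equilibrium" sketch: a CDT agent submits a *program* that
# conditions on its counterpart's program, which can make cooperation a
# best response even in a one-shot PD. Purely illustrative.
import inspect  # note: inspect.getsource requires running this from a file

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source code is identical to mine."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

if __name__ == "__main__":
    src = inspect.getsource(clique_bot)
    print(clique_bot(src))                            # "C" against an exact copy
    print(clique_bot("def bot(_):\n    return 'D'"))  # "D" against anything else
```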
I think you can get cooperation in an iterated prisoner’s dilemma if there’s some probability p of playing another round and p is high enough; you just can’t know at the outset exactly how many rounds there are going to be.
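A sketch of the standard calculation behind this, assuming grim-trigger strategies and illustrative payoffs:

```python
# Grim trigger vs. grim trigger with per-round continuation probability p.
# Cooperation is sustainable iff the value of cooperating forever beats the
# value of defecting once and facing mutual defection afterwards.

T, R, P, S = 5, 3, 1, 0  # illustrative payoffs

def cooperate_value(p: float) -> float:
    # R every round, over an expected 1 / (1 - p) rounds
    return R / (1 - p)

def defect_value(p: float) -> float:
    # T now, then P in every later round
    return T + p * P / (1 - p)

p_star = (T - R) / (T - P)  # solving cooperate_value(p) >= defect_value(p)
print(f"cooperation sustainable iff p >= {p_star}")  # 0.5 for these payoffs

for p in (0.4, 0.5, 0.6):
    print(p, cooperate_value(p) >= defect_value(p))  # False, True, True
```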
Yep, it’s definitely possible to get cooperation in a pure CDT frame, but IMO it’s also clearly silly how sensitive the cooperative equilibrium is to things like this (and it doesn’t track how basically any real-world decision-making actually happens).
I do think that an iterated game with an unknown number of iterations is a better approximation of real-world situations than either a single round or a fixed n rounds (and it gets the more realistic result that cooperation is possible).
I agree that people are mostly not writing things out this way when they make real-world decisions, but that applies equally to CDT and TDT, and sensitivity to details like this seems like a fully general critique of game theory.
To be clear, uncertainty about the number of iterations isn’t enough. You need to assign positive probability to arbitrarily high numbers of iterations, and it can never be the case that p(>n rounds) is so much smaller than p(n rounds) that it’s worth defecting on round n regardless of the effect on your reputation. These are pretty strong assumptions.
So cooperation crucially depends on your belief that, all the way from 10 rounds to Graham’s number of rounds (and beyond), the probability of more than n rounds conditional on reaching n rounds never drops below e.g. 20% (or whatever threshold the payoff structure of your game implies).
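A quick numeric illustration of how strong this condition is, using the 0.5 threshold implied by the toy payoffs above and two assumed priors over the total number of rounds:

```python
# Check the hazard condition P(> n rounds | reached round n) >= p_star for
# two toy priors over the total number of rounds. Everything illustrative.
from math import exp

p_star = 0.5  # threshold implied by the toy payoffs above

def conditional_continuation(probs: list, n: int) -> float:
    """P(> n rounds | >= n rounds), where probs[k] = P(total rounds = k)."""
    tail = sum(probs[n:])
    return sum(probs[n + 1:]) / tail if tail > 0 else 0.0

# Geometric prior (constant hazard): the condition holds at every n.
q = 0.6
geom = [(1 - q) * q**k for k in range(150)]

# Poisson prior (thin tail): conditional continuation decays toward zero,
# so the condition fails at some finite n and backward induction bites there.
lam, pmf, poisson = 10, exp(-10), []
for k in range(150):
    poisson.append(pmf)
    pmf *= lam / (k + 1)

for n in (5, 10, 20, 30):
    print(n, round(conditional_continuation(geom, n), 3),
          round(conditional_continuation(poisson, n), 3))
# The geometric prior stays at 0.6 >= p_star at every n; the Poisson prior
# drops below 0.5 around n = 20 and keeps falling, so cooperation unravels.
```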
Huh, I do think the “correct” game theory is not sensitive in these respects (indeed, all LDTs cooperate in a 1-shot mirrored prisoner’s dilemma). I agree that of course you want to be sensitive to some things, but the kind of sensitivity here seems silly.
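As a toy rendering of the mirrored-PD point (an illustration of the logical-correlation reasoning, not a real LDT implementation):

```python
# In a mirrored PD both players run the same decision procedure on the same
# inputs, so my move and my counterpart's move cannot differ: the only
# reachable outcomes are (C, C) and (D, D). Payoffs illustrative.

T, R, P, S = 5, 3, 1, 0

def mirrored_choice() -> str:
    # Choosing an action here fixes *both* players' actions, so the real
    # comparison is mutual cooperation (R) vs. mutual defection (P).
    return "C" if R > P else "D"

print(mirrored_choice())  # "C": R = 3 beats P = 1
```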