Condition 1 seems quite hard to achieve, especially in civilizations like ours that lack sophisticated technology for making predictions. So I expect reciprocity on its own not to be much of a motive in the true prisoner's dilemma unless the players are extremely good at reasoning about one another; that doesn't require going all the way to literal simulations, but it does require much more ability than we have today.
I think this is too weak. Even without precommitments, I am pretty good at predicting which of my friends will defect or cooperate in prisoner's-dilemma-like scenarios, and I think you are underestimating humans' ability to predict one another.
I make a lot of decisions each week that I think are pretty similar in structure to a prisoner's dilemma and where I expect the correlation part to check out, because I am pretty good at predicting what other people will do. In my experience people are generally pretty honest about how they reason in situations like this, so you can just ask them; predicting that they will do whatever the algorithm they described to you in the past would do gets you pretty high accuracy.
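As a toy illustration of the prediction-by-stated-algorithm idea, here is a minimal sketch in Python. Every name, decision rule, and choice in it is made up for illustration; the point is just that scoring "they'll do what the rule they told me would do" against actual behavior is a well-defined, checkable prediction procedure.

```python
# Toy sketch: predict a friend's play from the decision rule they once
# described, then score that prediction against their actual choices.
# All names, rules, stakes, and choices below are hypothetical.

stated_rules = {
    "alice": lambda stakes: "cooperate",  # "I just always cooperate"
    "bob": lambda stakes: "cooperate" if stakes < 100 else "defect",
}

# (name, stakes, what they actually did) -- invented observations
actual_choices = [
    ("alice", 50, "cooperate"),
    ("alice", 500, "cooperate"),
    ("bob", 50, "cooperate"),
    ("bob", 500, "defect"),
]

hits = sum(stated_rules[name](stakes) == choice
           for name, stakes, choice in actual_choices)
print(f"prediction accuracy: {hits / len(actual_choices):.0%}")
```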
I’m definitely more skeptical of this mechanism than you are (at least on its own, i.e. absent either correlation+kindness or the usual causal+moral reasons to cooperate). I don’t think it’s obvious.
The relevant parameter is something like the gap R = E[my opponent's credence that I'll cooperate | I cooperate] − E[my opponent's credence that I'll cooperate | I defect]. My feeling is that this gap can't be that close to 1, because a bunch of other things affect my opponent's credence (my past behavior, their experience with agents similar to me, etc.) beyond the actual decision I'm currently uncertain about.
For an EDT agent I think it's pretty clear that R << 1. For a UDT agent it's less clear, since everything is so confusing and R > 0.5 is not crazy.
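To make the role of R concrete, here is a hedged sketch of the EDT expected-value comparison. It assumes standard prisoner's-dilemma payoffs and, as a simplification not in the original discussion, that the opponent's probability of cooperating equals their credence that I will cooperate; the payoff numbers are illustrative.

```python
# Sketch of the EDT comparison, under the simplifying assumption that the
# opponent cooperates with probability equal to their credence that I will.
# Payoffs are illustrative, with T > CC > P > S as in a standard PD.

T, CC, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, mutual coop, punishment, sucker

def edt_value(action: str, q_c: float, q_d: float) -> float:
    """Expected payoff given opponent-cooperation probabilities conditional
    on my cooperating (q_c) or defecting (q_d); R corresponds to q_c - q_d."""
    q = q_c if action == "cooperate" else q_d
    payoff_if_opp_c = CC if action == "cooperate" else T
    payoff_if_opp_d = S if action == "cooperate" else P
    return q * payoff_if_opp_c + (1 - q) * payoff_if_opp_d

for r in (0.2, 0.5, 0.9):  # the gap R = q_c - q_d, centered at 0.5
    q_c, q_d = 0.5 + r / 2, 0.5 - r / 2
    ev_c = edt_value("cooperate", q_c, q_d)
    ev_d = edt_value("defect", q_c, q_d)
    print(f"R={r}: EV(C)={ev_c:.2f}, EV(D)={ev_d:.2f} -> "
          f"{'cooperate' if ev_c > ev_d else 'defect'}")
```

With these particular payoffs the crossover sits a bit below R = 0.5 (the sketch prints defect at R = 0.2 and cooperate at R = 0.5 and 0.9), which is why the question of whether R > 0.5 is crazy does real work in the argument above.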