I’m definitely more skeptical of this mechanism than you are (at least on its own, i.e. absent either correlation+kindness or the usual causal+moral reasons to cooperate). I don’t think it’s obvious.
The relevant parameter is something like the gap R = E[my opponent’s probability that I’ll cooperate | I cooperate] − E[my opponent’s probability that I’ll cooperate | I defect]. My feeling is that R can’t be that close to 1, because a bunch of stuff other than my actual decision (the one thing I’m currently uncertain about) feeds into my opponent’s probability: my past behavior, their experience with agents similar to me, and so on.
For an EDT agent I think it’s pretty clear that R << 1. For a UDT agent it’s less clear, since everything is so confusing, and R > 0.5 is not crazy.
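To put a toy number on that, here’s a minimal sketch (entirely my own construction, not something from the exchange above): write p_C and p_D for my opponent’s credence that I’ll cooperate, conditional on my cooperating and defecting respectively, so R = p_C − p_D, and assume (hypothetically) that the opponent cooperates with probability equal to that credence. The prisoner’s-dilemma payoffs T > Rw > P > S below are made up.

```python
# Toy model (my construction, not from the discussion above):
# R = p_C - p_D, where p_C / p_D are the opponent's credence that I'll
# cooperate, conditional on my cooperating / defecting. Hypothetical
# assumption: the opponent cooperates with probability equal to that
# credence. Payoffs follow the usual PD ordering T > Rw > P > S.

def min_evidential_edge(T=5.0, Rw=3.0, P=1.0, S=0.0, p_D=0.5):
    """Smallest R = p_C - p_D at which cooperating beats defecting
    in evidential expected utility.

    EU(cooperate) = p_C*Rw + (1 - p_C)*S
    EU(defect)    = p_D*T  + (1 - p_D)*P
    Setting these equal and solving for p_C gives the threshold.
    """
    p_C_star = (P + p_D * (T - P) - S) / (Rw - S)
    return p_C_star - p_D

if __name__ == "__main__":
    for p_D in (0.1, 0.3, 0.5):
        need = min_evidential_edge(p_D=p_D)
        cap = 1.0 - p_D  # p_C can't exceed 1, so R can't exceed this
        status = "feasible" if need < cap else f"infeasible (R is capped at {cap:.2f})"
        print(f"p_D={p_D:.1f}: cooperate iff R > {need:.2f} -- {status}")
```

On these made-up numbers the edge you need grows with p_D and is capped at 1 − p_D (since p_C ≤ 1), which is one concrete way of seeing why R has to get uncomfortably close to 1 before this mechanism carries the day on its own.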