The most obvious example of cooperating due to acausal dependence is making two atom-by-atom-identical copies of an agent and putting them in a one-shot prisoner’s dilemma against each other. But two agents whose decision-making is 90% similar instead of 100% identical can cooperate on those grounds too, provided the utility of mutual cooperation is sufficiently large.
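To make the quoted claim concrete, here is a minimal sketch under one possible reading of “90% similar”: with probability p the opponent’s choice simply mirrors mine. The payoff numbers (T > R > P > S) are illustrative assumptions, not taken from the original.

```python
# One-shot prisoner's dilemma payoffs for the row player, with the
# usual ordering T > R > P > S (illustrative values, assumed here).
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

def expected_payoff(my_move: str, p: float) -> float:
    """Expected payoff if the opponent mirrors my move with probability p."""
    if my_move == "C":
        return p * R + (1 - p) * S
    return p * P + (1 - p) * T

for p in (1.0, 0.9, 0.5):
    print(f"p={p}: cooperate -> {expected_payoff('C', p)}, defect -> {expected_payoff('D', p)}")
# p=1.0 (exact copy):     cooperate 3.0, defect 1.0 -> cooperating wins
# p=0.9 ("90% similar"):  cooperate 2.7, defect 1.4 -> cooperating wins
# p=0.5 (no dependence):  cooperate 1.5, defect 3.0 -> defecting wins
```

Under this reading, cooperating comes out ahead exactly when p > (T - S) / ((T - S) + (R - P)), which for these numbers is roughly p > 0.71; that is what “provided the utility of mutual cooperation is sufficiently large” cashes out to here, since a larger reward for mutual cooperation lowers the threshold.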
I’m not sure what “90% similar” means. Either I’m capable of making decisions independently of my opponent, or else I’m not. In real life, I am capable of doing so. The clone situation is strange, I admit, but in that case I’m not sure to what extent my “decision” even makes sense as a concept; I’ll clearly decide whatever my code says I’ll decide. As soon as you start assuming copies of my code are out there, I stop being comfortable with ascribing myself free will at all.
Anyway, none of this applies to real life, not even approximately. In real life, my decision cannot change your decision at all; in real life, nothing can even come close to predicting a decision I make in advance (assuming I put even a little bit of effort into that decision).
If you’re concerned about blushing and other involuntary tells, then you’re just saying that the best strategy in a prisoner’s dilemma involves signaling very strongly that you’re trustworthy. I agree that this is correct against most human opponents. But surely you agree that if I can control my microexpressions, it’s best to signal “I will cooperate” while actually defecting, right?
Let me just ask you the following yes-or-no question: do you agree that my “always defect, but first pretend to be whatever will convince my opponent to cooperate” strategy beats every other strategy in a realistic one-shot prisoner’s dilemma? By one-shot, I mean that people will have no memory of me defecting against them, so I can suffer no ill effects from retaliation.
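For contrast with the copy case above, here is a minimal sketch of the dominance reasoning behind that question, under the assumption that the opponent’s choice is statistically independent of my actual move (my signaling can raise their cooperation probability q, but once q is fixed, my own move doesn’t feed back into it). Payoffs are the same illustrative T > R > P > S values as before.

```python
# Same illustrative payoffs as above: T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def expected_payoff(my_move: str, q: float) -> float:
    """Expected payoff when the opponent cooperates with fixed probability q."""
    if my_move == "C":
        return q * R + (1 - q) * S
    return q * T + (1 - q) * P

for q in (0.0, 0.5, 1.0):
    print(f"q={q}: cooperate -> {expected_payoff('C', q)}, defect -> {expected_payoff('D', q)}")
# For every fixed q, defecting pays (T - R) more when they cooperate and
# (P - S) more when they defect, so it dominates; signaling matters only
# insofar as it raises q.
```

So under the independence assumption, the best response is whatever maximizes q while still playing D, which is exactly the strategy described above.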