You’re confusing correlation with causation. Different players’ decisions may be correlated, but they sure as hell aren’t causative of each other (unless they literally see each other’s code, maybe).
Causation isn’t necessary. You’re right that correlation isn’t quite sufficient, though!
What’s needed for rational cooperation in the prisoner’s dilemma is a two-way dependency between A’s and B’s decision-making. That can be because A is causally impacting B, or because B is causally impacting A; but it can also occur when there’s a common cause and neither is causing the other, like when my sister and I have similar genomes even though my sister didn’t create my genome and I didn’t create her genome. Or our decision-making processes can depend on each other because we inhabit the same laws of physics, or because we’re both bound by the same logical/mathematical laws, even if we’re on opposite sides of the universe.
(Dependence can also happen by coincidence, though if it’s completely random I’m not sure how you’d find out about it in order to act upon it!)
The most obvious example of cooperating due to acausal dependence is making two atom-by-atom-identical copies of an agent and putting them in a one-shot prisoner’s dilemma against each other. But two agents whose decision-making is 90% similar instead of 100% identical can cooperate on those grounds too, provided the utility of mutual cooperation is sufficiently large.
For the same reason, a very large utility difference can rationally mandate cooperation even if cooperating only changes the probability of the other agent’s behavior from ‘100% probability of defection’ to ‘99% probability of defection’.
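Here’s a minimal sketch of that arithmetic, treating my move as evidence about the other agent’s move. (The payoff numbers and the Python framing are just illustrative assumptions on my part, not anything canonical.)

```python
# Expected utility of each move when my choice is evidence about the other
# agent's choice. All payoff numbers here are illustrative assumptions.

def expected_utility(move, p_other_coop_given_my_move, payoffs):
    """payoffs[(my_move, their_move)] is my payoff; moves are 'C' or 'D'."""
    p = p_other_coop_given_my_move[move]
    return p * payoffs[(move, 'C')] + (1 - p) * payoffs[(move, 'D')]

# Ordinary prisoner's dilemma payoffs: temptation > reward > punishment > sucker.
payoffs = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Atom-by-atom copies: the other copy's move matches mine with certainty.
copies = {'C': 1.0, 'D': 0.0}
print(expected_utility('C', copies, payoffs))  # 3.0 -- cooperation wins
print(expected_utility('D', copies, payoffs))  # 1.0

# Weak dependence: cooperating only moves the other agent from 100% to 99%
# defection, but the mutual-cooperation payoff is enormous.
big = dict(payoffs)
big[('C', 'C')] = 1000
weak = {'C': 0.01, 'D': 0.0}
print(expected_utility('C', weak, big))  # 10.0 -- still beats defecting
print(expected_utility('D', weak, big))  # 1.0
```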
Calling this “source code sharing”, instead of just “signaling for the purposes of a repeated game”, seems counterproductive.
I disagree! “Code-sharing” risks confusing someone into thinking there’s something magical and privileged about looking at source code. It’s true this is an unusually rich and direct source of information (assuming you understand the code’s implications and are sure what you’re seeing is the real deal), but the difference between that and inferring someone’s embarrassment from a blush is quantitative, not qualitative.
Some sources of information are more reliable and more revealing than others; but the same underlying idea is involved whenever something is evidence about an agent’s future decisions. See: Newcomblike Problems are the Norm
Yes, I agree that in a repeated game, the situation is trickier and involves a lot of signaling. The one-shot game is much easier: just always defect. By definition, that’s the best strategy.
If you and the other player have common knowledge that you reason the same way, then the correct move is to cooperate in the one-shot game. The correct move is to defect when those conditions don’t hold strongly enough, though.
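To make “strongly enough” a bit more concrete: suppose, crudely, that the other player’s move simply matches mine with some probability q (one rough way to cash out “we reason the same way”). Then there’s a break-even q above which cooperating has the higher expected utility. A sketch, again with illustrative payoff numbers of my own:

```python
# Break-even "similarity" for cooperation, under the crude assumption that the
# other player's move matches mine with probability q. Illustrative payoffs:
# temptation=5, reward=3, punishment=1, sucker=0.
T, R, P, S = 5, 3, 1, 0

def eu_cooperate(q):
    return q * R + (1 - q) * S   # matched -> (C, C); unmatched -> (C, D)

def eu_defect(q):
    return q * P + (1 - q) * T   # matched -> (D, D); unmatched -> (D, C)

# Cooperation wins iff q*R + (1-q)*S > q*P + (1-q)*T,
# i.e. iff q > (T - S) / ((T - S) + (R - P)).
threshold = (T - S) / ((T - S) + (R - P))
print(threshold)                           # ~0.714
print(eu_cooperate(0.9), eu_defect(0.9))   # 2.7 vs 1.4: 90% matching is enough here
print(eu_cooperate(0.5), eu_defect(0.5))   # 1.5 vs 3.0: too weak, so defect
```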
The most obvious example of cooperating due to acausal dependence is making two atom-by-atom-identical copies of an agent and putting them in a one-shot prisoner’s dilemma against each other. But two agents whose decision-making is 90% similar instead of 100% identical can cooperate on those grounds too, provided the utility of mutual cooperation is sufficiently large.
I’m not sure what “90% similar” means. Either I’m capable of making decisions independently of my opponent, or else I’m not. In real life, I am capable of doing so. The clone situation is strange, I admit, but in that case I’m not sure to what extent my “decision” even makes sense as a concept; I’ll clearly decide whatever my code says I’ll decide. As soon as you start assuming copies of my code are out there, I stop being comfortable assigning myself free will at all.
Anyway, none of this applies to real life, not even approximately. In real life, my decision cannot change your decision at all; in real life, nothing can even come close to predicting a decision I make in advance (assuming I put even a little bit of effort into that decision).
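And if my decision really is probabilistically independent of yours, the arithmetic is trivial: defecting is better no matter what probability I assign to you cooperating. A quick sketch, with the same kind of illustrative payoffs as above:

```python
# If the other player's cooperation probability p is the same whatever I do
# (no dependence at all), defection wins for every p. Illustrative payoffs:
# temptation=5, reward=3, punishment=1, sucker=0.
T, R, P, S = 5, 3, 1, 0

def eu(my_move, p_other_coop):
    if my_move == 'C':
        return p_other_coop * R + (1 - p_other_coop) * S
    return p_other_coop * T + (1 - p_other_coop) * P

for p in (0.0, 0.5, 1.0):
    # Defect beats cooperate at every p, because T > R and P > S.
    print(p, eu('C', p), eu('D', p))
```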
If you’re concerned about blushing etc., then you’re just saying the best strategy in a prisoner’s dilemma involves signaling very strongly that you’re trustworthy. I agree that this is correct against most human opponents. But surely you agree that if I can control my microexpressions, it’s best to signal “I will cooperate” while actually defecting, right?
Let me just ask you the following yes-or-no question: do you agree that my “always defect, but first pretend to be whatever will convince my opponent to cooperate” strategy beats all other strategies for a realistic one-shot prisoner’s dilemma? By one-shot, I mean that people will not have any memory of me defecting against them, so I can suffer no ill effects from retaliation.