If you know the following fact: “The other guy will cooperate iff I cooperate”, even if you know nothing about the nature of the cause of the correlation, that’s still a good enough reason to cooperate.
You ask yourself "If I defect, what will the outcome be? If I cooperate, what will the outcome be?" Taking the correlation into account, you then determine which outcome you prefer. And there you go.
For example, imagine that two AIs created with the same underlying architecture (though possibly with different preferences) meet up. They also know this fact about their similarity. Then they may reason something like "hrmm… The same underlying algorithms running in me are running in my opponent. So presumably they are reasoning the exact same way as I am, even at this moment. So whichever way I happen to decide, cooperate or defect, they'll probably decide the same way. So the only reasonably possible outcomes would seem to be 'both of us cooperate' or 'both of us defect', therefore I choose the former, since it has a better outcome for me. Therefore I cooperate."
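The reasoning above can be sketched in a few lines of code. This is a minimal illustration, not anyone's actual decision algorithm: the payoff numbers are the conventional Prisoner's Dilemma values (an assumption, since the text doesn't fix specific numbers), and the `choose` function is a hypothetical helper. The point is that once the agent knows the opponent's choice mirrors its own, only the diagonal outcomes are reachable, and mutual cooperation wins the comparison:

```python
# Standard Prisoner's Dilemma payoffs (assumed values, for illustration).
# Keys are (my action, opponent's action); values are (my payoff, theirs).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

def choose(correlated: bool) -> str:
    """Pick the action with the best payoff for me.

    If `correlated` is True, I know the opponent's choice mirrors
    mine, so the only reachable outcomes are (C, C) and (D, D).
    Otherwise I treat their choice as independent and fall back on
    the usual dominance argument.
    """
    if correlated:
        # Only the diagonal outcomes are possible: compare (C, C) vs (D, D).
        return max("CD", key=lambda a: PAYOFFS[(a, a)][0])
    # Without the correlation, defection dominates: it is at least as
    # good whatever the opponent does (here, compare worst cases).
    return max("CD", key=lambda a: min(PAYOFFS[(a, b)][0] for b in "CD"))

print(choose(correlated=True))   # prints "C"
print(choose(correlated=False))  # prints "D"
```

With the correlation, the comparison is 3 (both cooperate) against 1 (both defect), so cooperation comes out ahead; without it, the same agent defects for the usual dominance reasons.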
In other words, what I choose is also lawful. That is, physics underlies my brain. My decision is not just a thing that causes future things, but a thing that was caused by past things. If I know that the same past things influenced my opponent's decision in the same way, then I may be able to infer "whatever sort of reasoning I'm doing, they're also doing, so..."
Or did I completely fail to understand your objection?