Or, more generally: “If, for whatever reason, there’s sufficiently strong correlation between my cooperation and my opponent’s cooperation, then cooperation is the correct answer”
You need causation, not correlation. Correlation considers the whole state space, whereas you need to look at the correlation within each conditional region of the state space, given one action (your cooperation) or the other (your defection), which in this case corresponds to causation. If you only look for unconditional correlation, you are inadvertently asking the same circular question: "what will I do?". When you act, you determine which parts of the state space are annihilated, becoming not just counterfactual but impossible, and this is all you can ever do. The unconditional correlation depends on that, since it's computed over what remains. So you can't search for that information and then use it as a basis for your decision.
If you know the following fact: “The other guy will cooperate iff I cooperate”, even if you know nothing about the nature of the cause of the correlation, that’s still a good enough reason to cooperate.
You ask yourself "If I defect, what will the outcome be? If I cooperate, what will the outcome be?" Taking the correlation into account, you then determine which outcome you prefer. And there you go.
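That comparison can be sketched directly. This is a minimal illustration under assumed standard Prisoner's Dilemma payoffs (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection; the numbers are my own, not from the thread): condition the opponent's behavior on your action and pick whichever action has the better expected outcome.

```python
# PAYOFF[(my_action, their_action)] -> my payoff (illustrative numbers)
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_action(p_coop_given_i_coop, p_coop_given_i_defect):
    """Choose the action with the higher expected payoff, where the
    opponent's probability of cooperating is conditioned on my action."""
    def expected(my_action, p_they_coop):
        return (p_they_coop * PAYOFF[(my_action, "C")]
                + (1 - p_they_coop) * PAYOFF[(my_action, "D")])

    ev_cooperate = expected("C", p_coop_given_i_coop)
    ev_defect = expected("D", p_coop_given_i_defect)
    return "C" if ev_cooperate > ev_defect else "D"

# Perfect correlation ("the other guy cooperates iff I cooperate"):
print(best_action(1.0, 0.0))  # C  (3 beats 1)

# No correlation: defection dominates whatever they do:
print(best_action(0.5, 0.5))  # D  (3.0 beats 1.5)
```

With no correlation the usual dominance argument reappears, which is what makes the conditional probabilities, not the raw correlation, the load-bearing part.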
For example, imagine that, say, two AIs that were created with the same underlying architecture (though possibly with different preferences) meet up. They also know this fact about their similarity. Then they may reason something like "hrmm… The same underlying algorithms running in me are running in my opponent. So presumably they are reasoning the exact same way as I am, even at this moment. So whichever way I happen to decide, cooperate or defect, they'll probably decide the same way. So the only realistically possible outcomes would seem to be 'both of us cooperate' or 'both of us defect', therefore I choose the former, since it has the better outcome for me. Therefore I cooperate."
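The twin argument can be put in the same form, again under the assumed payoff numbers above: if the opponent runs the same decision procedure, only the symmetric outcomes are reachable, so the agent compares just the diagonal of the payoff matrix.

```python
# Sketch of the "identical architecture" reasoning (illustrative payoffs).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def twin_decision():
    """Whatever I output, my twin outputs the same, so only the
    symmetric outcomes (C, C) and (D, D) are live possibilities;
    pick the one I prefer."""
    diagonal = {a: PAYOFF[(a, a)] for a in ("C", "D")}
    return max(diagonal, key=diagonal.get)

print(twin_decision())  # C: mutual cooperation (3) beats mutual defection (1)
```

Note the off-diagonal entries, including the tempting (D, C) payoff of 5, never enter the comparison, because the symmetry assumption rules those outcomes out.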
In other words, what I choose is also lawful. That is, physics underlies my brain. My decision is not just a thing that causes future things, but a thing that was caused by past things. If I know that the same past things influenced my opponent's decision in the same way, then I may be able to infer "whatever sort of reasoning I'm doing, they're also doing, so..."
Or did I completely fail to understand your objection?