“You are also told some specifics about the algorithm that the alien uses to reach its decision, and likewise told that the alien is told as much about you.”
If I know enough to see that my decision doesn’t affect the alien’s, I defect. If I don’t know enough, I consider that the alien might know what my own algorithm is, so I decide to cooperate if I think the alien will cooperate. I assume the alien knows this, and that he knows that I know. Therefore I assume the alien will cooperate, because he expects this to cause me to cooperate based on his knowledge of my thought processes (and CC is preferable to DD). Following the algorithm just laid out, I cooperate.
This is still just superrationality, though a little more advanced than usual. I have incomplete knowledge of my opponent’s thought processes; I assume the rest will be similar to mine, and consequently I choose the optimal symmetric strategy and hope he does the same.
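The decision rule above can be sketched in code. This is purely my own toy illustration (the function name, the `decisions_are_linked` flag, and the depth cap on the “he knows that I know…” regress are all my inventions, not anything from the original post): defect if the opponent’s choice is known to be independent of mine; otherwise predict a symmetric opponent by running the same rule, and cooperate iff he is predicted to cooperate.

```python
def decide(decisions_are_linked: bool, depth: int = 3) -> str:
    """Return 'C' or 'D' under the rule sketched above.

    decisions_are_linked: whether I know enough to see that my decision
    is correlated with the alien's (False means defection dominates).
    depth: caps the 'I know that he knows...' regress; the symmetric
    opponent is modeled by recursively calling this same function.
    """
    if not decisions_are_linked:
        return "D"  # my choice can't influence his, so defect
    if depth == 0:
        # Base of the regress: fall back on symmetry and pick the
        # better symmetric outcome, CC over DD.
        return "C"
    # Predict the (assumed symmetric) opponent one level down.
    opponent = decide(decisions_are_linked, depth - 1)
    return "C" if opponent == "C" else "D"

print(decide(decisions_are_linked=True))   # prints C: mutual cooperation
print(decide(decisions_are_linked=False))  # prints D: independent choices
```

The fixed point is what the comment describes: with linked decisions every level of the regress bottoms out in cooperation, and with independent decisions plain defection wins.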