In this version of the game, instead of choosing their actions simultaneously, Alice moves first, and Bob moves after he knows Alice’s move. However, Alice knows Bob’s thought processes well enough that she can predict his move ahead of time.
This might lead to a contradiction: Bob’s action depends on Alice’s action, and Alice is not always capable of predicting her own action, especially while she is still deciding what it should be. It might therefore be impossible for Alice to predict Bob’s action, even if the dependence of Bob’s action on Alice’s is simple, i.e. even if Alice understands Bob’s algorithm very well.
The scenarios that result in a contradiction are not compatible with the verbal description of the problem. We must therefore conclude that the scenario is one of those containing an instance of the pair “Alice and Bob” for which it is possible for Alice to predict Bob’s moves.
If a problem specified “Alice can predict Bob”, and instances of the pair for which prediction is possible do exist, then an answer concluding “it is impossible for Alice to predict Bob’s action” would simply be wrong: it would be responding to a problem incompatible with the one specified.
Ok. Alice can predict Bob’s move given Alice’s move.
Ok, but if Alice can predict what Bob’s move will be given her own move, and she gets to choose her own move, that means Alice can effectively control Bob’s move.
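The reasoning above can be sketched as a tiny simulation loop: Alice runs Bob’s response function on each of her candidate moves and keeps the move whose predicted outcome she prefers. The payoff numbers and the mirroring Bob below are illustrative assumptions, not details from the thread.

```python
# Illustrative sketch: Alice moves first, Bob sees her move, and Alice
# knows Bob's response function well enough to simulate it.
# The payoffs and the "mirror" Bob are assumptions for this example.

# Alice's payoffs in a standard Prisoner's Dilemma: (alice, bob) -> utility
PAYOFF_ALICE = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def bob_mirrors(alice_move):
    """A hypothetical responsive Bob: he copies whatever Alice played."""
    return alice_move

def alice_best_move(bob):
    """Alice simulates Bob's response to each candidate move and keeps
    the move with the highest predicted payoff for herself."""
    return max(("C", "D"), key=lambda m: PAYOFF_ALICE[(m, bob(m))])

print(alice_best_move(bob_mirrors))  # -> C: predicted (C, C) pays 3, beating (D, D)'s 1
```

Because Alice evaluates her move by its *predicted consequence through Bob*, choosing her own move is, in effect, choosing the pair of moves.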
To some extent, yes. But it depends on what function Bob implements. If Bob always plays Defect, Alice has no way of making him play Cooperate (likewise, if he always plays Cooperate, Alice can’t make him play Defect).
Yes, but didn’t we already establish that Bob would always defect, because he has nothing to gain from cooperating in either case, so he will defect no matter what Alice chooses? Or is Bob also a TDT-style agent?
Bob is ‘rational’. Interpret this according to the decision theory of your choice.
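To make the dependence on Bob’s function concrete, here is a hedged sketch comparing three hypothetical Bob policies under the same illustrative payoffs. Against a constant policy Alice controls nothing about Bob’s move; only a responsive Bob gives her choice any leverage over him.

```python
# Hedged sketch: illustrative PD payoffs; the three Bob policies are
# hypothetical, chosen to show how much control Alice actually has.

PAYOFF_ALICE = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_move_and_outcome(bob):
    """Alice picks the move maximizing her payoff given Bob's response
    function; returns the resulting (alice_move, bob_move) pair."""
    move = max(("C", "D"), key=lambda m: PAYOFF_ALICE[(m, bob(m))])
    return move, bob(move)

bobs = {
    "always defect":    lambda a: "D",  # Alice cannot induce cooperation
    "always cooperate": lambda a: "C",  # Alice's best response is to defect
    "mirror Alice":     lambda a: a,    # only here does Alice's choice move Bob
}

for name, bob in bobs.items():
    alice_move, bob_move = best_move_and_outcome(bob)
    print(f"{name}: Alice plays {alice_move}, Bob plays {bob_move}")
# Against both constant Bobs Alice plays D; only against the mirror
# does she play C, yielding mutual cooperation.
```

This is the sense in which the answer is “to some extent”: Alice’s control over Bob extends exactly as far as Bob’s function is sensitive to her move.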