This might lead to a contradiction: since Bob’s action depends on Alice’s action, and Alice is not always capable of predicting her own action (especially while she is still deciding what it should be), it might be impossible for Alice to predict Bob’s action, even when the dependence of Bob’s action on Alice’s action is simple, i.e. even when Alice understands Bob’s algorithm very well.
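A minimal sketch may make the regress concrete. The names and moves here are hypothetical (the text specifies no particular game), and Bob is assumed to simply mirror Alice’s move, so his dependence on her is as simple as possible; even so, Alice’s attempt to predict him while still deciding collapses into predicting herself:

```python
def bob(alice_action: str) -> str:
    # Bob's dependence on Alice is trivial: he copies her move.
    return alice_action

def alice_decide() -> str:
    # To predict Bob, Alice must feed him her own action -- which she
    # is still in the middle of choosing.  The call below therefore
    # recurses without ever bottoming out (Python raises RecursionError).
    predicted_bob = bob(alice_decide())
    return "cooperate" if predicted_bob == "cooperate" else "defect"
```

Nothing about Bob is hard to understand here; the obstacle is purely that Alice’s prediction of Bob routes through a prediction of her own not-yet-made decision.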
Scenarios that result in a contradiction are not compatible with the verbal description of the problem. We must therefore conclude that the intended scenario is one containing instances of the pair “Alice and Bob” for which it is possible for Alice to predict Bob’s moves.
If a problem specified “Alice can predict Bob”, and instances of the two exist in which prediction is possible, then an answer concluding “it is impossible for Alice to predict Bob’s action” would simply be wrong: the person giving it would be responding to a problem incompatible with the one specified.
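For contrast, here is a sketch of a consistent instance (again with hypothetical moves, and the same assumed mirroring Bob): Alice’s decision procedure does not route through a prediction of Bob, so she can fix her action first and then simulate his known algorithm without any regress.

```python
def bob(alice_action: str) -> str:
    # Same simple, fully known algorithm: Bob mirrors Alice.
    return alice_action

def alice_decide() -> str:
    # Alice commits to her action by a rule that does not depend on
    # predicting Bob...
    my_action = "cooperate"
    # ...and only then predicts Bob by running his algorithm on her
    # already-fixed action.  No self-reference, no contradiction.
    predicted_bob = bob(my_action)
    assert predicted_bob == my_action
    return my_action
```

Instances of this shape are the ones compatible with a problem statement that stipulates “Alice can predict Bob”.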