I believe that a compatibilist can accept both free will and determinism at the same time. I reject both as not useful for understanding decisions. There is a difference between believing both A and B and believing neither A nor B.
It seems to me unlikely that an AI could predict its own decisions by examining its source code without running that code. But I am not sure it is completely impossible just because I cannot see how it would be done. Even if it were possible, I would be extremely surprised if it was faster or easier than just running the code.
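To illustrate the intuition (a minimal sketch, with a hypothetical `decide` function standing in for a decision procedure): some computations are structured so that each step depends on the previous one, and there is no known shortcut to the result other than performing every step. For such a procedure, "predicting" the decision by inspection would amount to re-running it anyway.

```python
import hashlib

def decide(seed: str, rounds: int = 1000) -> str:
    # Iterated hashing: each round's input is the previous round's
    # output, so no known analysis of this source code yields the
    # final value short of actually performing the iterations.
    h = seed.encode()
    for _ in range(rounds):
        h = hashlib.sha256(h).digest()
    return "yes" if h[0] % 2 == 0 else "no"

# The decision is fully deterministic, yet the cheapest way to
# learn it is simply to run the function.
print(decide("example"))
```

This does not prove self-prediction without execution is impossible in general, only that for some programs no faster route than execution is known.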