Paul, being fixed or not fixed has nothing to do with it. Suppose I program a deterministic AI to play the game (the AI picks a box).
The deterministic AI knows that it is deterministic, and it knows that I know too, since I programmed it. So I also know whether it will take one or both boxes, and it knows that I know this.
At first, of course, it doesn’t know itself whether it will take one or both boxes, since it hasn’t completed running its code yet. So it says to itself, “Either I will take only one box or both boxes. If I take only one box, the programmer will have known this, so I will get 1,000,000. If I take both boxes, the programmer will have known this, so I will get 1,000. It is better to get 1,000,000 than 1,000. So I choose to take only one box.”
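The AI's reasoning above can be sketched in code. This is a minimal, hypothetical setup (the function names and payoffs-as-constants are my own illustration, not from the original): the programmer "predicts" by running the agent's own deterministic decision code, fills the opaque box accordingly, and then the agent makes its actual choice by running that same code.

```python
def agent_decides():
    # The AI's deterministic reasoning: under the assumption that the
    # programmer will have predicted its choice, one-boxing yields
    # 1,000,000 and two-boxing yields only 1,000.
    payoff_if_one_box = 1_000_000
    payoff_if_two_box = 1_000
    return "one" if payoff_if_one_box > payoff_if_two_box else "two"

def play():
    # The programmer predicts by running the agent's code...
    prediction = agent_decides()
    opaque_box = 1_000_000 if prediction == "one" else 0
    clear_box = 1_000
    # ...and the agent then actually chooses, by running the same code.
    choice = agent_decides()
    return opaque_box if choice == "one" else opaque_box + clear_box

print(play())  # 1000000
```

Because the same deterministic code produces both the prediction and the choice, the prediction cannot come apart from the decision: the one-boxing agent reliably walks away with 1,000,000.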
If someone tries to confuse the AI by saying, “if you take both, you can’t get less,” the AI will respond, “I can’t take both without different code, and if I had that code, the programmer would have known that and would have put less in the box, so I would get less.”
Or in other words: it is quite possible to make a decision, like the AI above, even if everything is fixed. For you do not yet know in what way everything is fixed, so you must make a choice, even though which one you will make is already determined. Or if you found out that your future is completely determined, would you go and jump off a cliff, since this could not happen unless it were inevitable anyway?