Except that if the simulation really is accurate, his response should already be taken into account. If reality is deterministic, a sufficiently accurate and detailed program should be able to predict it exactly. Human free will relies on the fact that our behavior has too many influences to be predicted by any past or current means; currently, we can't even enumerate all of those influences.
If the simulation is really accurate, then the GLUT (giant lookup table) would enter an infinite loop if he uses an 'always do the opposite' strategy.
I.e., "Choose either heads or tails. The oracle predicts you will choose ." If his strategy is 'choose heads because I like heads', then the oracle will correctly predict it. If his strategy is 'do what the oracle says', then the oracle can choose either heads or tails, and the oracle will predict that and get it correct. If his strategy is 'flip a coin and choose what it says', then the oracle will predict that action and, if it is a sufficiently powerful oracle, get it correct by modeling all the physical interactions that could change the state of the coin.
However, if his strategy is 'do the opposite', then the oracle will never halt. It will fall into an infinite recursion, choosing heads, then tails, then heads, then tails, etc., until it crashes. It's no different than an infinite loop in a computer program.
It’s not that the oracle is inaccurate. It’s that a recursive GLUT cannot be constructed for all possible agents.
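The three cases above can be sketched in a few lines. This is a minimal illustration, assuming (hypothetically) that the oracle predicts by searching for a self-consistent announcement, i.e. a fixed point of the agent's strategy; the function names are illustrative, not from any real library:

```python
def oracle(strategy):
    """Hypothetical oracle: search for a self-consistent prediction.

    'strategy' maps the oracle's announced prediction to the agent's
    actual choice. A prediction counts as correct only if announcing
    it does not change the choice, i.e. it is a fixed point.
    """
    for prediction in ("heads", "tails"):
        if strategy(prediction) == prediction:
            return prediction
    # No fixed point exists: a naive oracle that predicts by simulating
    # the agent would recurse forever here (heads -> tails -> heads ...).
    raise RuntimeError("no consistent prediction exists for this agent")

# 'Choose heads because I like heads': trivially predictable.
print(oracle(lambda p: "heads"))    # heads

# 'Do what the oracle says': any announcement is self-fulfilling.
print(oracle(lambda p: p))          # heads (first fixed point found)

# 'Do the opposite': no announcement can be correct.
try:
    oracle(lambda p: "tails" if p == "heads" else "heads")
except RuntimeError as e:
    print(e)
```

The contrarian strategy is a diagonalization in miniature: the failure is not an inaccuracy in the oracle but the nonexistence of any consistent table entry for that agent.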