“The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X”. Is it clear that an accurate prediction X exists?
You mean, he is confronted with the statement that this GLUT predicts he will do X. That statement may or may not be true, depending on his behavior. He can adopt one of three strategies: always do what is predicted, always do the opposite of what is predicted, or ignore the prediction and choose based on unrelated criteria. A lookup table containing accurate predictions of this sort can be constructed only in the first and third cases, not in the second.
Except that if the simulation really is accurate, his response should already be taken into account. Reality is deterministic; an adequately accurate and detailed program should be able to predict exactly what he will do. Human free will relies on the fact that our behavior has too many influences to be predicted by any past or current means. Currently, we can’t even define all of the influences.
If the simulation is really accurate, then the GLUT will enter an infinite loop if he uses an ‘always do the opposite’ strategy.
I.e., “Choose either heads or tails. The oracle predicts you will choose [heads or tails].” If his strategy is ‘choose heads because I like heads’, then the oracle will correctly predict it. If his strategy is ‘do what the oracle says’, then the oracle can choose either heads or tails, predict that, and get it correct. If his strategy is ‘flip a coin and choose what it says’, then the oracle will predict that action and, if it is a sufficiently powerful oracle, get it correct by modeling all the physical interactions that could change the state of the coin.
However, if his strategy is ‘do the opposite’, then the oracle will never halt. It will get into an infinite recursion, choosing heads, then tails, then heads, then tails, and so on until it crashes. It’s no different from an infinite loop in a computer program.
It’s not that the oracle is inaccurate. It’s that a recursive GLUT cannot be constructed for all possible agents.
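The four strategies described above map naturally onto a small fixed-point check. Here is a minimal sketch in Python (the function names and the fixed coin seed are illustrative assumptions, not anything from the discussion): the oracle looks for a prediction the agent would actually confirm. Such a prediction exists for the heads-liker, the obedient agent, and the coin-flipper, but not for the contrarian, which is exactly the case where a naive oracle that simulated the agent on its own output would bounce between heads and tails forever.

```python
# Minimal sketch of the argument, assuming a two-option choice.
# All names here are illustrative.

import random

def oracle(strategy):
    """Return a prediction the agent would actually confirm, if one exists."""
    for prediction in ("heads", "tails"):
        if strategy(prediction) == prediction:
            return prediction   # accurate prediction (a fixed point) found
    return None                 # no fixed point: no accurate GLUT entry exists

# 'Choose heads because I like heads' -- ignores the prediction.
def likes_heads(prediction):
    return "heads"

# 'Do what the oracle says' -- any prediction is self-confirming.
def obedient(prediction):
    return prediction

# 'Flip a coin' -- ignores the prediction; a deterministic world means the
# oracle can model the flip (represented here by a fixed seed).
def coin_flip(prediction, world_seed=42):
    return random.Random(world_seed).choice(("heads", "tails"))

# 'Do the opposite' -- no prediction is self-confirming.
def contrarian(prediction):
    return "tails" if prediction == "heads" else "heads"

print(oracle(likes_heads))   # heads
print(oracle(obedient))      # heads (tails would work equally well)
print(oracle(coin_flip))     # whatever the modeled flip lands on
print(oracle(contrarian))    # None -- simulating this agent on the oracle's
                             # own output would alternate heads/tails forever
```

The contrarian case is the diagonalization: the lookup table would need an entry equal to the opposite of itself, which is why the ‘do the opposite’ strategy defeats any GLUT while the other strategies do not.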