Newcomb WOULD in fact take forever, if both Omega and the agent were perfect calculators of each other and contained each other's full source (not the halting problem, but related). Omega couldn’t make the offer until it had simulated the agent, including the simulated agent fully simulating Omega, recursively to infinity.
But most formulations don’t require this fidelity. Even if Omega is quite inaccurate, as long as it’s better at predicting the agent than the agent is at predicting Omega, the problem still works.
It’s possible for two programs to know each other’s code and perfectly deduce each other’s results without taking forever; they just can’t do it by simulating each other. They can do it by formal reasoning about each other, provided the deduction happens to be sufficiently easy and neither is trying to prevent the other from predicting it. The issues here are not about fidelity of prediction.
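A toy sketch of that last point (my own illustration, not from the original discussion): two programs each hold the other's full source as data, and each deduces the other's output by a trivial static check on that source rather than by executing it, so no infinite regress of mutual simulation ever starts. All names here are invented for the example.

```python
# Each program's "source" is an explicit string the other can read as data.
AGENT_SRC = '''
def agent(omega_src):
    # One-boxes unconditionally, whatever Omega's code says.
    return "one-box"
'''

OMEGA_SRC = '''
def omega(agent_src):
    # Predicts by inspecting the agent's source, never executing it.
    # This stands in for "formal reasoning about the other program".
    if 'return "one-box"' in agent_src:
        return "one-box"
    return "two-box"
'''

# Materialize both programs from their source strings.
ns = {}
exec(AGENT_SRC, ns)
exec(OMEGA_SRC, ns)
agent, omega = ns["agent"], ns["omega"]

prediction = omega(AGENT_SRC)   # Omega's deduction: a static check only
actual = agent(OMEGA_SRC)       # the agent's actual choice when run
assert prediction == actual     # correct prediction, zero simulation
```

Of course, a one-line pattern match only works because this agent is trivially predictable; the point is just that prediction can go through reasoning about code rather than running it.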