Omega can slightly change the problem (simulate an agent with the same decision algorithm as X but a different utility function)
I think you missed my point.
This is irrelevant. The agent is actually outside, thinking about what to do in Newcomb's problem. But only we know this; the agent itself doesn't. All the agent knows is that Omega always predicts correctly, which means the agent can model Omega as a perfect simulator. The actual method Omega uses to make its predictions does not matter; the world would look the same to the agent regardless.
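To make that concrete, here is a toy Python sketch of the mental model I mean; the function names and the payoff numbers (the usual Newcomb amounts) are my own illustration, not part of any formalization:

```python
# The agent's internal representation of Newcomb's problem.
# Omega's real prediction method is unknown, but since Omega always
# predicts correctly, the agent loses nothing by modeling it as a
# perfect simulator of the agent's own decision procedure.

def model_of_omega(decision_procedure):
    # Perfect-simulator model: the predicted choice just *is*
    # whatever the agent's decision procedure outputs.
    return decision_procedure()

def modeled_payoff(decision_procedure):
    prediction = model_of_omega(decision_procedure)  # Omega's move
    choice = decision_procedure()                    # the agent's move
    if choice == "one-box":
        return 1_000_000 if prediction == "one-box" else 0
    else:  # two-box
        return 1_000 + (1_000_000 if prediction == "one-box" else 0)

# Under this model, one-boxing wins:
assert modeled_payoff(lambda: "one-box") == 1_000_000
assert modeled_payoff(lambda: "two-box") == 1_000
```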
Unless Omega predicts without simulating. For instance, this formulation of UDT can be formally proved to one-box without any simulation taking place.
Errrr. The agent does not simulate anything in my argument. The agent has a "mental model" of Omega, in which Omega is a perfect simulator. It's about the representation of the problem within the agent's mind.
In your link, Omega (the function U()) is a perfect simulator: it calls the agent function A() twice, once to get its prediction and once for the actual decision.
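That is, the structure is roughly this (a Python rendering of the shape described; the payoff constants are the standard Newcomb amounts, and the hard-coded A() is just a placeholder so the sketch runs):

```python
def A():
    # Placeholder agent; in the linked formalization A's output comes
    # from a proof search, it isn't hard-coded. 1 means "one-box".
    return 1

def U():
    # World program: A() is called twice, once as Omega's prediction
    # (i.e., a simulation of the agent) and once as the actual decision.
    prediction = A()                          # first call: simulation
    box_b = 1_000_000 if prediction == 1 else 0
    decision = A()                            # second call: the real choice
    return box_b if decision == 1 else box_b + 1_000

print(U())  # 1000000 for this one-boxing placeholder
```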
The problem would work just as well if the first call went not to A directly but to an oracle query about whether A() = 1. There are ways of predicting that aren't simulation, and if that's how Omega works, your idea falls apart.
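Something like this, say (the same sketch as above with the first call replaced; `oracle_says` is a hypothetical stand-in for a halting-oracle query, faked here so the snippet runs):

```python
def A():
    # Same placeholder agent as before; 1 means "one-box".
    return 1

def oracle_says(claim):
    # Hypothetical stand-in for the oracle. Here it cheats by just
    # evaluating the claim, but the interface is the point: Omega asks
    # *whether* A() == 1 instead of running A to find out.
    return eval(claim)

def U():
    # World program where the prediction comes from an oracle query,
    # not from a simulation of A.
    prediction_is_one_box = oracle_says("A() == 1")
    box_b = 1_000_000 if prediction_is_one_box else 0
    decision = A()   # A is only run as the actual decision
    return box_b if decision == 1 else box_b + 1_000

print(U())  # 1000000, same answer, with no simulation in Omega's step
```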