Omega could tell you “Either I am simulating you to gauge your response, or this is reality and I predicted your response”—and the problem would be essentially the same.
This is essentially the same only if you care solely about what happens in reality. If you also care about outcomes in simulations, then it is not “essentially the same” as the regular formulation of the problem.
If I care about my outcomes when I am “just a simulation” in much the same way as when I am “in reality”, then the phrasing you’ve used for Omega would not lead to the standard Newcomb problem. If I’m understanding this correctly, your reformulation of what Omega says results in justified two-boxing under CDT.
Either I’m a simulation, or I’m not. Since I might choose between one-boxing and two-boxing according to a probability distribution (e.g., one-box 70% of the time, two-box otherwise), Omega must simulate me several times. This means I’m much more likely to be a simulation than the real player. And if we’re in a simulation, Omega has not yet predicted our response, so two-boxing really is genuinely better than one-boxing.
In other words, while Newcomb’s problem is usually an illustration of why CDT fails, because CDT says we should two-box, under your reformulation CDT correctly says we should two-box. (Under the assumption that we value simulated utilons as we do “real” ones.)
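For what it’s worth, here is a rough sketch of the anthropic arithmetic behind this, as I read it (Python; the number of simulations Omega runs is my own illustrative assumption, not something the problem specifies):

```python
import random

# Assumed mixed strategy from the example above: one-box 70% of the time.
P_ONE_BOX = 0.7
# Assumed number of simulations Omega runs to pin down that frequency.
N_SIMS = 1000

# Omega simulates the decision repeatedly to estimate the one-boxing frequency.
samples = [random.random() < P_ONE_BOX for _ in range(N_SIMS)]
print(f"Omega's estimate of P(one-box): {sum(samples) / N_SIMS:.3f}")

# There are N_SIMS simulated copies of me and one real me. If I can't tell
# which one I am, the odds heavily favour my being one of the simulations.
p_simulation = N_SIMS / (N_SIMS + 1)
print(f"P(I am a simulation) is roughly {p_simulation:.4f}")  # ~0.999
```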
That depends on what you care about. If you only care about what the non-simulated you gets, then one-boxing is still better. And I don’t see any reason why a simulated you should care, because they won’t actually be around to get the utility, as presumably Omega ends the simulation after they give their response.
Either I’m a simulation, or I’m not. Since I might choose between one-boxing and two-boxing according to a probability distribution (e.g., one-box 70% of the time, two-box otherwise), Omega must simulate me several times. This means I’m much more likely to be a simulation than the real player.
If you assume the standard implicit condition of a perfectly deterministic universe, in which Omega predicts every single player with 100% accuracy, then Omega does not need to simulate you more than once. Instead, Omega needs perfect information about your full state before the decision and about any parameters that might influence it (along with, of course, incredible computing power).
We can simplify this consideration away by stipulating that the simulated agent doesn’t actually get any money, so the consequences of each choice are the same for the simulated agent.
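A minimal sketch of that stipulation (standard Newcomb payoffs assumed; the payoff function here is mine, purely for illustration):

```python
BOX_A = 1_000        # transparent box, always contains $1,000
BOX_B = 1_000_000    # opaque box, contains $1,000,000 only if Omega predicted one-boxing

def payoff(is_simulation: bool, two_box: bool, box_b_full: bool) -> int:
    """Money this particular agent (simulated or real) actually walks away with."""
    if is_simulation:
        return 0  # stipulated: the simulated agent never receives any money
    total = BOX_B if box_b_full else 0
    return total + BOX_A if two_box else total

# For the simulated agent, both choices are consequence-identical,
# regardless of what ends up in box B:
assert payoff(True, two_box=True, box_b_full=False) == 0
assert payoff(True, two_box=False, box_b_full=True) == 0
```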