I don’t think Nomega has to simulate you interacting with Omega in order to know how you would react should you encounter it, in the same way that you can predict the output of many computer programs without simulating them.
By the time you get mugged, you could be 100% sure that you are in the Omega world rather than the Nomega world, but the principle is that your decision in the Omega world affects the Nomega world, and so, before knowing which world it is in, UDT commits to making the decision that maximizes EV across both worlds.
This logic operates in the same way for the coin coming up tails: when you see the tails, you know you’re in the tails world, but your decision in the tails world affects the heads world, so you have to consider it. Likewise, your decision in the Omega world affects the Nomega world (independent of any sort of simulation argument).
Thus, in a situation where UDT has seen Omega, it has influence over the Omega world and the Nomega/Omega world, but no influence over the normal world or the Nomega world. Since the Omega world has so much more weight than the Nomega/Omega world, UDT will effectively act as if it’s in the Omega world (a toy sketch of this weighting follows).
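To make that weighting concrete, here is a minimal sketch of the ex-ante calculation in Python. The $100 mugging cost, the $10,000 reward, and the rule that Nomega rewards agents it predicts would refuse Omega are all invented toy assumptions, not part of the discussion above:

```python
# Toy sketch of UDT's ex-ante policy choice (all payoffs are invented).
# Omega world: Omega flips a fair coin, rewards predicted payers on heads,
# asks for payment on tails. Nomega world: Nomega rewards predicted refusers.

def ev_of_policy(pays_omega: bool, w_omega: float, w_nomega: float) -> float:
    """EV of one policy, summed over both worlds weighted by their measure."""
    if pays_omega:
        ev_omega = 0.5 * 10_000 + 0.5 * (-100)  # heads reward, tails payment
        ev_nomega = 0.0                          # Nomega withholds its reward
    else:
        ev_omega = 0.0
        ev_nomega = 10_000.0                     # Nomega pays out
    return w_omega * ev_omega + w_nomega * ev_nomega

# With the Nomega world discounted to ~2^-100 of the Omega world's measure,
# the policy of paying dominates: 4950.0 vs. ~8e-27.
for pays in (True, False):
    print(pays, ev_of_policy(pays, w_omega=1.0, w_nomega=2 ** -100))
```

With comparable weights on the two worlds the comparison would flip, which is why the measure argument below matters.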
This argument would also suggest that by the time you see tails, you know you live in the tails world and thus should not pay up.
> I don’t think Nomega has to simulate you interacting with Omega in order to know how you would react should you encounter it
By ‘simulating’ I just mean that it’s reasoning in some way about your behavior in another universe; it doesn’t have to be a literal simulation. But the point remains: of all the ways that Nomega could choose to act, for some reason it has chosen to simulate/reason about your behavior in a universe containing Omega, and then give away its resources depending on how it predicts you’ll act.
What this means is that, from a Kolmogorov complexity perspective, Nomega is strictly more complex than Omega, since the definition of Nomega includes simulating/reasoning about Omega. Worlds containing Nomega will be discounted by a factor exponential in this additional complexity. Say it takes 100 extra bits to specify Nomega. Then worlds containing Nomega have $2^{-100}$ times the measure of worlds with Omega under the Solomonoff prior, meaning that UDT cares much less about them.
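Spelled out, writing $m(\cdot)$ for the Solomonoff measure of a world and $K$ for the description length in bits of the Omega world (both used informally here), the 100 extra bits give:

$$\frac{m(\text{Nomega world})}{m(\text{Omega world})} \approx \frac{2^{-(K+100)}}{2^{-K}} = 2^{-100}.$$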
(My comment above was reasoning as if Nomega could choose to simulate/reason about many different possible universes, not just the ones with Omega. Then, perhaps, its baseline complexity might be comparable to Omega’s. Either way, the result is that the worlds where Nomega exists and you have influence don’t have very high measure.)
> This argument would also suggest that by the time you see tails, you know you live in the tails world and thus should not pay up.
What I meant by “Nomega world” in that paragraph was a world where Nomega exists but does not simulate/reason about your behavior in the Omega world. The analogous situation to the tails/heads pair of worlds here is the “Omega”/“Nomega simulating Omega” pair. I acknowledge that you would have counterfactual influence over this world. The difference is that the heads/tails worlds have equal measure, whereas the “Nomega simulates Omega” world has much less measure than the Omega world (under a ‘reasonable’ measure such as the Solomonoff prior).
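As a toy numerical check, reusing the invented $100/$10,000 payoffs from the sketch above:

$$\begin{aligned}
\text{heads/tails (weights } \tfrac12, \tfrac12\text{):}\quad & EV(\text{pay}) = \tfrac12(10{,}000) + \tfrac12(-100) = 4{,}950 > 0 = EV(\text{refuse}) \\
\text{Omega vs. Nomega-sim-Omega (weights } 1, 2^{-100}\text{):}\quad & EV(\text{pay}) = 4{,}950, \quad EV(\text{refuse}) = 2^{-100} \cdot 10{,}000 \approx 8 \times 10^{-27}
\end{aligned}$$

If the second pair of worlds had equal weights instead, refusing (5,000) would beat paying (2,475), so the measure asymmetry is doing all the work.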