UDT’s behavior here is totally determined by its prior. The question is which prior is more reasonable. ‘Closeness to Solomonoff induction’ is a good proxy for reasonableness here.
I think a prior putting greater weight on Omega, given that one has seen Omega, is much more reasonable. Here’s the reasoning. Let’s say that the description complexity of both Omega and Nomega is 1000 bits. Before UDT has seen either of them, it assigns a prior probability of 2^-1000 to worlds where either of them exists. So it might seem that it should weight them equally, even having seen Omega.
However, the question then becomes—why is Nomega choosing to simulate the world containing Omega? Nomega could choose to simulate any world. In fact, a complete description of Nomega’s behavior must include a specification of which world it is simulating. This means that, while it takes 1000 bits to specify Nomega, specifying that Nomega exists and is simulating the world containing Omega actually takes 2000 bits.[1]
So UDT’s full prior ends up looking like:
- 999/1000: Normal world
- 2^-1000: Omega exists
- 2^-1000: Nomega exists
- 2^-2000: Nomega exists and is simulating the world containing Omega
Thus, in a situation where UDT has seen Omega, it has influence over the Omega world and the Nomega/Omega world, but no influence over the normal world and the Nomega world. Since the Omega world has so much more weight than the Nomega/Omega world, UDT will effectively act as if it’s in the Omega world.
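To make the weight comparison concrete, here’s a minimal sketch in Python. Only the 1000-bit and 2000-bit description lengths come from the argument above; the payoff numbers and the assumption that Omega rewards a policy of paying while Nomega rewards a policy of refusing are illustrative.

```python
from fractions import Fraction

def prior_weight(bits):
    # Solomonoff-style weight for a world whose shortest description is `bits` bits long
    return Fraction(1, 2 ** bits)

# Worlds that UDT's choice (pay Omega vs. refuse) can influence, per the argument above
w_omega        = prior_weight(1000)  # Omega exists
w_nomega_omega = prior_weight(2000)  # Nomega exists AND simulates the world containing Omega

# Hypothetical payoffs: Omega rewards a policy of paying, Nomega rewards a policy of refusing
ev_pay    = w_omega * 10_000
ev_refuse = w_nomega_omega * 10_000

print(ev_pay > ev_refuse)  # True: the 2^-1000 vs 2^-2000 weight gap dominates the decision
```

(Fractions are used so the 2^-2000 weight doesn’t underflow to zero as a float.)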
You might object that Nomega is defined by its property of messing with Omega, so it will naturally simulate worlds with Omega. In that case, it’s strictly more complex to specify than Omega, probably by several hundred bits, due to the complexity of ‘messing with’.
I don’t think Nomega has to simulate you interacting with Omega in order to know how you would react should you encounter it, in the same way that you can predict the output of many computer programs without simulating them.
By the time you get mugged, you could be 100% sure that you are in the Omega world rather than the Nomega world, but the principle is that your decision in the Omega world affects the Nomega world, and so, before knowing which world it is in, UDT commits to making the decision that maximizes EV across both worlds.
This logic operates in the same way for the coin coming up tails: when you see the tails, you know you’re in the tails world, but your decision in the tails world affects the heads world, so you have to consider it. Likewise, your decision in the Omega world affects the Nomega world (independent of any sort of simulation argument).
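For reference, here’s the standard counterfactual-mugging arithmetic this appeals to; the $100/$10,000 stakes are the usual illustrative numbers, not something specified in this thread.

```python
# Omega flips a fair coin, asks you for $100 on tails, and would have paid you
# $10,000 on heads -- but only if you're the kind of agent who pays when asked.
p_heads = p_tails = 0.5

ev_policy_pay    = p_heads * 10_000 + p_tails * (-100)  # evaluated before seeing the coin
ev_policy_refuse = 0.0

print(ev_policy_pay)                     # 4950.0
print(ev_policy_pay > ev_policy_refuse)  # True: the committed agent pays even after seeing tails
```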
> Thus, in a situation where UDT has seen Omega, it has influence over the Omega world and the Nomega/Omega world, but no influence over the normal world and the Nomega world. Since the Omega world has so much more weight than the Nomega/Omega world, UDT will effectively act as if it’s in the Omega world.
This argument would also suggest that by the time you see tails, you know you live in the tails world and thus should not pay up.
> I don’t think Nomega has to simulate you interacting with Omega in order to know how you would react should you encounter it
By ‘simulating’ I just mean that it’s reasoning in some way about your behavior in another universe; it doesn’t have to be a literal simulation. But the point remains: of all the ways that Nomega could choose to act, for some reason it has chosen to simulate/reason about your behavior in a universe containing Omega, and then give away its resources depending on how it predicts you’ll act.
What this means is that, from a Kolmogorov complexity perspective, Nomega is strictly more complex than Omega, since the definition of Nomega includes simulating/reasoning about Omega. Worlds containing Nomega will be discounted by a factor exponential in this additional complexity. Say it takes 100 extra bits to specify Nomega. Then worlds containing Nomega have 2^-100 times the measure of worlds with Omega under the Solomonoff prior, meaning that UDT cares much less about them.
(My comment above was reasoning as if Nomega could choose to simulate/reason about many different possible universes, not just the ones with Omega. Then, perhaps, its baseline complexity might be comparable to Omega’s. Either way, the result is that the worlds where Nomega exists and you have influence don’t have very high measure.)
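As a one-line check of how steep that discount is (the 100-extra-bit figure above is itself a hypothetical):

```python
extra_bits = 100                       # hypothetical extra complexity of specifying Nomega
relative_measure = 2.0 ** -extra_bits  # Nomega-world weight relative to Omega-world weight
print(relative_measure)                # ~7.9e-31, so UDT gives these worlds almost no weight
```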
> This argument would also suggest that by the time you see tails, you know you live in the tails world and thus should not pay up.
What I meant by “Nomega world” in that paragraph was a world where Nomega exists but does not simulate/reason about your behavior in the Omega world. The analogous situation to the tails/heads worlds here is the “Omega” world versus the “Nomega simulating Omega” world. I acknowledge that you would have counterfactual influence over the latter. The difference is that the heads/tails worlds have equal measure, whereas the “Nomega simulates Omega” world has much less measure than the Omega world (under a ‘reasonable’ measure such as Solomonoff).