I think there’s a big asymmetry between Omega and Nomega here, namely that Omega actually appears before you, while Nomega does not. This means there’s much better reason to think that Omega will actually reward you in an alternate universe than Nomega.
Put another way, the thing you could pre-commit to could be a broad policy of acausally cooperating with beings you have good reason to think exist, in your universe or a closely adjacent one (adjacent in the sense that your actions here actually have a chance of affecting things there). Once you learn that a being such as Omega exists, then you should act as though you had pre-committed to cooperating with them all along.
This means there’s much better reason to think that Omega will actually reward you in an alternate universe than Nomega.
That’s exactly what others are saying about priors. But really, it’s about your probabilities (including posteriors once someone appears). The “simple hack decision theory” works for all of these cases—multiply the conditional probability by the value of each possible outcome, and pick the option that gives the largest utility contribution.
If you assign a much lower probability to Nomega than to Omega, and assign a high probability of honesty to the setup, you want to pay. With other beliefs, you might not.
That’s exactly what others are saying about priors
It’s not the same thing. Other people are correctly pointing out that UDT’s behavior here depends on the prior. I’m arguing that a prior similar to the one we use in our day-to-day lives would assign greater probability to Omega than Nomega, given that one has seen Omega. The OP can be seen as implicitly about both issues.
If the answer is that you have a higher prior towards Omega before the mugging, then fine, that solves the problem. But if you think Omega is more likely to exist only because you see Omega in front of you, then doesn’t that violate UDT’s principle of never updating?
Although UDT is formally updateless, the ‘mathematical intuition module’ which it uses to determine the effects of its actions can make it effectively act as though it’s updating.
Here’s a simple example. Say UDT’s prior over worlds is the following:
75% chance: you will see a green button and a red button, and a sign saying “press the red button for $5”
25% chance: same buttons, but the sign says “press the green button for $5”
Now, imagine the next thing UDT sees is the sign saying that it should press the green button. Of course, what it should do is press the green button (assuming the signs are truthful), even though in expectation the best thing to do would be pressing the red button. So why does it do this? UDT doesn’t update—it still considers the worlds where it sees the red button to be 3x more important—however, what does change is that, once it sees the green button sign, it no longer has any influence over the worlds where it sees the red button sign. Thus it acts as though it has effectively updated on seeing the green button sign, even though its distribution over worlds remains unchanged.
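To make the mechanism concrete, here is a minimal Python sketch of updateless policy selection on this toy example (my own illustration, not any canonical UDT implementation): the agent scores entire policies, i.e. maps from observation to action, against the unchanged prior, and the winning policy presses green upon seeing the green sign.

```python
# Minimal sketch of updateless policy selection on the two-button example.
# World probabilities and the $5 payoff come from the prior described above.
from itertools import product

WORLDS = {"red_sign": 0.75, "green_sign": 0.25}   # prior over worlds
OBSERVATIONS = ["red_sign", "green_sign"]          # what the agent might see
ACTIONS = ["press_red", "press_green"]

def utility(world, action):
    """$5 iff the button pressed matches the (truthful) sign in that world."""
    correct = "press_red" if world == "red_sign" else "press_green"
    return 5 if action == correct else 0

# A policy maps each possible observation to an action.
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]

def expected_utility(policy):
    # Evaluated against the *prior* -- no updating on what is actually seen.
    return sum(p * utility(w, policy[w]) for w, p in WORLDS.items())

best = max(policies, key=expected_utility)
print(best)
# {'red_sign': 'press_red', 'green_sign': 'press_green'}
# The chosen policy presses green upon seeing the green sign, even though
# green-sign worlds carry only 25% of the prior weight: actions taken after
# seeing the green sign simply have no effect on red-sign worlds.
```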
By analogy, in your scenario, even though Omega and Nomega might be equally likely a priori, UDT’s influence over Omega’s actions is far greater given that it has actually seen Omega. Or to be more precise—in the situation where UDT has both seen Omega and the coin comes up heads, it has a lot of predictable influence over Omega’s behavior in an (equally valuable by its prior) world where Omega is real and the coin comes up tails. It has no such predictable influence over worlds where Nomega exists.
But UDT’s decision on how to interact with Omega does directly affect worlds in which Nomega exists instead of Omega.
Again, an overly simplistic prior:
50% chance: Omega exists, and we get counterfactually mugged, with the coin coming up heads half the time and tails half the time.
50% chance: Nomega exists, guesses what we would do if Omega existed and the coin came up tails, and pays out accordingly.
There is only one decision: do you pay if Omega exists and the coin comes up tails? That decision affects both (or all three) possible worlds.
Even once you see that Omega exists, UDT has already recognized that, in order to maximize utility, it should precommit (or just decide, or whatever) not to pay.
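To make that comparison concrete, here is a rough sketch of the expected values under this 50/50 prior. The $100/$10,000 stakes and the assumption that Nomega rewards agents it predicts would refuse to pay are placeholders of mine, since the thread doesn’t pin down either detail.

```python
# Sketch of the expected-value comparison under the 50/50 prior above.
# COST, REWARD and Nomega's payout rule are illustrative assumptions,
# not taken from the thread.
PRIOR = {"omega": 0.5, "nomega": 0.5}
COST, REWARD = 100, 10_000

def expected_value(pays: bool) -> float:
    # Omega world: fair coin -- heads rewards predicted payers,
    # tails costs actual payers.
    omega_ev = 0.5 * (REWARD if pays else 0) + 0.5 * (-COST if pays else 0)
    # Assumed convention: Nomega rewards agents it predicts would refuse.
    nomega_ev = 0 if pays else REWARD
    return PRIOR["omega"] * omega_ev + PRIOR["nomega"] * nomega_ev

print(expected_value(True), expected_value(False))   # 2475.0 5000.0
# Under these numbers and this prior, the non-paying policy wins,
# which is the point: the conclusion is driven entirely by the prior.
```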
UDT’s behavior here is totally determined by its prior. The question is which prior is more reasonable. ‘Closeness to Solomonoff induction’ is a good proxy for reasonableness here.
I think a prior putting greater weight on Omega, given that one has seen Omega, is much more reasonable. Here’s the reasoning. Let’s say that the description complexity of both Omega and Nomega is 1000 bits. Before UDT has seen either of them, it assigns a probability of 2^-1000 to worlds where either of them exists. So it might seem that it should weight them equally, even having seen Omega.
However, the question then becomes—why is Nomega choosing to simulate the world containing Omega? Nomega could choose to simulate any world. In fact, a complete description of Nomega’s behavior must include a specification of which world it is simulating. This means that, while it takes 1000 bits to specify Nomega, specifying that Nomega exists and is simulating the world containing Omega actually takes 2000 bits: 1000 for Nomega itself, plus roughly another 1000 to pin down the Omega-containing world it simulates.[1]
So UDT’s full prior ends up looking like:
999/1000: Normal world
2^-1000: Omega exists
2^-1000: Nomega exists
2^-2000: Nomega exists and is simulating the world containing Omega
Thus, in a situation where UDT has seen Omega, it has influence over the Omega world and Nomega/Omega world, but no influence over the normal world and Nomega world. Since the Omega world has so much more weight than the Omega/Nomega world, UDT will effectively act as if it’s in the Omega world.
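Under this prior the expected-value comparison flips. The sketch below reuses the same placeholder stakes ($100/$10,000) and the assumed convention that Nomega rewards predicted refusers, and uses exact fractions so the tiny weights don’t underflow; the normal world and the plain Nomega world are omitted because the decision changes nothing there.

```python
# Sketch of the same comparison under the complexity-discounted prior above.
# Stakes and Nomega's payout rule are illustrative assumptions.
from fractions import Fraction

COST, REWARD = 100, 10_000
W_OMEGA      = Fraction(1, 2**1000)   # Omega exists
W_NOMEGA_SIM = Fraction(1, 2**2000)   # Nomega exists and simulates the Omega world

def expected_value(pays: bool) -> Fraction:
    # Omega world: fair coin -- heads rewards predicted payers, tails costs payers.
    omega_ev = Fraction(1, 2) * (REWARD if pays else 0) + Fraction(1, 2) * (-COST if pays else 0)
    # Assumed convention, as before: Nomega rewards predicted refusers.
    nomega_ev = 0 if pays else REWARD
    return W_OMEGA * omega_ev + W_NOMEGA_SIM * nomega_ev

print(expected_value(True) > expected_value(False))   # True: paying wins
# The Omega world outweighs the Nomega-simulating-Omega world by a factor
# of 2^1000, so UDT effectively acts as if it is simply in the Omega world.
```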
You might object that Nomega is defined by its property of messing with Omega, so it will naturally simulate worlds with Omega. In that case, it’s strictly more complex to specify than Omega, probably by several hundred bits, due to the complexity of ‘messing with’.
I don’t think Nomega has to simulate you interacting with Omega in order to know how you would react should you encounter it, in the same way that you can predict the output of many computer programs without simulating them.
By the time you get mugged, you could be 100% sure that you are in the Omega world, rather than the Nomega world, but the principle is that your decision in the Omega world affects the Nomega world, and so, before knowing, UDT commits to making the decision that maximizes EV across both worlds.
This logic operates in the same way for the coin coming up tails—when you see tails, you know you’re in the tails world, but your decision in the tails world affects the heads world, so you have to consider it. Likewise, your decision in the Omega world affects the Nomega world (independent of any sort of simulation argument).
Thus, in a situation where UDT has seen Omega, it has influence over the Omega world and Nomega/Omega world, but no influence over the normal world and Nomega world. Since the Omega world has so much more weight than the Omega/Nomega world, UDT will effectively act as if it’s in the Omega world.
This argument would also suggest that by the time you see tails, you know you live in the tails world and thus should not pay up.
I don’t think Nomega has to simulate you interacting with Omega in order to know how you would react should you encounter it
By ‘simulating’ I just mean that it’s reasoning in some way about your behavior in another universe; it doesn’t have to be a literal simulation. But the point remains—of all the ways that Nomega could choose to act, for some reason it has chosen to simulate/reason about your behavior in a universe containing Omega, and then give away its resources depending on how it predicts you’ll act.
What this means is that, from a Kolmogorov complexity perspective, Nomega is strictly more complex than Omega, since the definition of Nomega includes simulating/reasoning about Omega. Worlds containing Nomega will be discounted by a factor proportional to this additional complexity. Say it takes 100 extra bits to specify Nomega. Then worlds containing Nomega have 2^-100 times the measure of worlds with Omega under the Solomonoff prior, meaning that UDT cares much less about them.
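Spelled out as a ratio of Solomonoff-prior weights (the 100-bit gap is just the illustrative figure from above):

```latex
% Illustrative only: K_Omega is Omega's description length, and the
% 100-bit gap is the placeholder figure from the text.
\frac{m(\text{Nomega world})}{m(\text{Omega world})}
  \approx \frac{2^{-(K_{\mathrm{Omega}} + 100)}}{2^{-K_{\mathrm{Omega}}}}
  = 2^{-100} \approx 8 \times 10^{-31}
```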
(My comment above was reasoning as if Nomega could choose to simulate/reason about many different possible universes, not just the ones with Omega. Then, perhaps, its baseline complexity might be comparable to Omega’s. Either way, the result is that the worlds where Nomega exists and you have influence don’t have very high measure.)
This argument would also suggest that by the time you see tails, you know you live in the tails world and thus should not pay up.
What I meant by “Nomega world” in that paragraph was a world where Nomega exists but does not simulate/reason about your behavior in the Omega world. The analogous situation to the tails/heads pair of worlds here is the “Omega”/“Nomega simulating Omega” pair. I acknowledge that you would have counterfactual influence over this world. The difference is that the heads/tails worlds have equal measure, whereas the “Nomega simulates Omega” world has much less measure than the Omega world (under a ‘reasonable’ measure such as the Solomonoff prior).