That one’s simple: prohibit indexical uncertainty. I must be able to assume that I am in the real world, not inside Omega, and the same goes for my scanner’s internal computation: if I anticipate it will be run inside Omega, I will change it accordingly.
Edit: sorry, now I see why exactly you asked. No, I have no proof that my list of Omega types is exhaustive. There could be a middle ground between types 2 and 3: an Omega that doesn’t simulate you, but still somehow prohibits you from using another Omega to cheat. But, as orthonormal’s examples show, such a machine doesn’t readily spring to mind.
Indexical uncertainty is a property of you, not Omega.
Saying Omega cannot create a situation in which you have indexical uncertainty is too vague. What process of cognition is prohibited to Omega that prevents producing indexical uncertainty, but still allows for making calibrated, discriminating predictions?
You’re digging deep. I already admitted that my list of Omegas isn’t proven to be exhaustive and probably never can be, given how crazy the individual cases sound. The thing I call a type 3 Omega would be better described as a Terminating Omega: a device that outputs one bit in bounded time given any input situation. If Omega is non-terminating (e.g. it throws me out of the game on predicting certain behavior, or hangs forever on some inputs), then of course such an Omega doesn’t necessarily have to be a simulation. But then you need a halfway credible account of what it does, because otherwise the problem is ill-specified and incomplete.
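In code terms, a Terminating Omega is just a procedure that is guaranteed to return one bit within a fixed budget for any input situation. Here is a minimal sketch; the `prediction_step` helper and the out-of-budget fallback are illustrative assumptions, not part of the problem statement.

```python
def terminating_omega(situation, prediction_step, budget=10**6):
    """Return one bit (True = "predicts one-boxing") within `budget` steps.

    prediction_step(state) -> (None, next_state) while still undecided,
                              or (bool, next_state) once a verdict is reached.
    Both the step function and the fallback below are hypothetical.
    """
    state = situation
    for _ in range(budget):
        verdict, state = prediction_step(state)
        if verdict is not None:
            return verdict
    return False  # budget exhausted: return a fixed answer instead of hanging
```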
The process you’ve described (Omega realizes this, then realizes that...) sounded like a simulation—that’s why I referred you to case 2. Of course you might have meant something I hadn’t anticipated.
Part of my motivation for digging deep on this issue is that, although I did not intend for my description of Omega and the detector reasoning about each other to be based on a simulation, I could see after you brought it up that it might be interpreted that way. I thought that if I knew in more detail what we mean by “simulation”, I would be able to tell whether I had implicitly assumed that Omega was using one. However, any strategy I come up with for making predictions seems like something I could consider a simulation, even if it lacks detail and, by omitting important details, is inaccurate. Even just guessing could be considered a very undetailed, very inaccurate simulation.
I would like a definition of simulation that doesn’t lead to this conclusion, but in case there isn’t one, suppose the restriction against simulation really means that Omega does not use a perfect simulation, and you have a chance to resolve the indexical uncertainty.
I can imagine situations in which an incomplete, though still highly accurate, simulation provides information to the simulated subject to resolve the indexical uncertainty, but this information is difficult or even impossible to interpret.
For example, suppose Omega does use a perfect simulation, except that he flips a coin. In the real world, Omega shows you the true result of the coin toss, but he simulates your response as if he had shown you the opposite result. Now you still don’t know whether you are in the simulation or in reality, but you are no longer guaranteed by determinism to make the same decision in each case. You could one-box if you see heads and two-box if you see tails. If you did this, you would have a 50% probability that the true flip was heads, in which case you gain nothing, and a 50% probability that the true flip was tails, in which case you gain $1,001,000, for an expected gain of $500,500. This is not as good as one-boxing either way and gaining $1,000,000. If Omega instead flips a biased coin that shows tails 60% of the time, and tells you this, then the same strategy has an expected gain of $600,600, still not as good as unconditional one-boxing. But if the coin is biased to show tails 1000 times out of 1001, then the strategy’s expected value equals that of one-boxing, and it does better for any more extreme bias.
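As a sanity check on those numbers, here is a minimal sketch in Python (the payoffs and probabilities are taken straight from the example above) that computes the expected payoff of the “one-box on heads, two-box on tails” strategy as a function of the coin’s bias toward tails:

```python
def switch_strategy_ev(p_tails):
    """Expected payoff of "one-box on heads, two-box on tails" when the
    simulation is shown the opposite face of the real coin.

    - Real flip heads (prob 1 - p_tails): you one-box, but your simulation
      saw tails and two-boxed, so box 1 is empty -> $0.
    - Real flip tails (prob p_tails): you two-box, your simulation saw heads
      and one-boxed, so box 1 holds $1,000,000 -> $1,001,000.
    """
    return (1 - p_tails) * 0 + p_tails * 1_001_000

ONE_BOX_EV = 1_000_000  # always one-boxing against the perfect simulation

for p in (0.5, 0.6, 1000 / 1001):
    print(f"p(tails) = {p:.6f}: switch EV = ${switch_strategy_ev(p):,.0f}")
# p(tails) = 0.500000: switch EV = $500,500
# p(tails) = 0.600000: switch EV = $600,600
# p(tails) = 0.999001: switch EV = $1,000,000   (break-even with one-boxing)
```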
So, if you suppose that Omega uses an imperfect simulation (without the coin), you can gather evidence about whether you are in reality or in the simulation. You would need to reach a probability greater than 1000/1001 that you are in reality before two-boxing becomes a good strategy. I would be impressed with a strategy that could accomplish that.
As for terminating, if Omega detects a paradox, Omega puts money in box 1 with 50% probability. It is not a winning strategy to force this outcome.
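To spell out the arithmetic under that rule: forcing the paradox and then taking both boxes has an expected payoff of 0.5 × $1,001,000 + 0.5 × $1,000 = $501,000, and taking only box 1 has an expected payoff of 0.5 × $1,000,000 = $500,000. Both fall short of the $1,000,000 a straightforward one-boxer expects, so engineering the paradox buys you nothing.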
It seems your probabilistic simulator Omega is amenable to rational analysis just like my case 2. In good implementations we can’t cheat, in bad ones we can; it all sounds quite normal and reassuring, no trace of a paradox. Just what I aimed for.
As for terminating, we need to demystify what “detecting a paradox” means. Does Omega somehow compute the actual probabilities of me choosing one or two boxes? Then what part of the world is assumed to be “random” and what part is evaluated exactly? An answer to this question might clear things up.
One way Omega might prevent paradox is by adding an arbitrary time limit, say one hour, for you to choose whether to one-box or two-box. Omega could then run the simulation, however accurate, up to that limit of simulated time or until you actually make a decision, whichever comes first. Exceeding the time limit would be treated as identical to two-boxing. A more sophisticated Omega, one that can find in constant time a point in the simulation at which you have made a decision (perhaps because the simulation state is described by a closed-form function with nice algebraic properties), could simply require that you eventually make a decision. This essentially puts the burden on the subject not to create a paradox, or anything that might be mistaken for a paradox, and not to take too long to decide.
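A minimal sketch of that time-limit mechanism, assuming hypothetical `advance` and `decision_made` helpers for stepping and inspecting the simulation:

```python
TIME_LIMIT = 3600.0  # one hour of simulated time, as in the example above

def predict_choice(initial_state, advance, decision_made, dt=1.0):
    """Run the simulation until the subject decides or the limit is hit.

    advance(state, dt)   -> the next simulation state          (hypothetical)
    decision_made(state) -> None, "one-box", or "two-box"      (hypothetical)

    Exceeding the time limit is treated the same as two-boxing.
    """
    t, state = 0.0, initial_state
    while t < TIME_LIMIT:
        choice = decision_made(state)
        if choice is not None:
            return choice
        state = advance(state, dt)
        t += dt
    return "two-box"  # no decision within the limit counts as two-boxing
```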
Then what part of the world is assumed to be “random” and what part is evaluated exactly?
Well, Omega could give you a pseudorandom number generator and agree to treat it as a probabilistic black box when making predictions. It might make sense to treat quantum decoherence as giving probabilities of observing the different macroscopic outcomes, unless something like world mangling is true and Omega can predict deterministically which worlds get mangled. Less accurate Omegas could use probability to account for their own inaccuracy.
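A sketch of what treating the generator as a probabilistic black box might look like; the `policy` function (Omega’s model of how you respond to each output) and the two-outcome generator are illustrative assumptions:

```python
def predicted_one_box_probability(policy, outcomes):
    """Average the predicted decision over the black-boxed random outputs.

    policy(outcome) -> True if the subject is predicted to one-box after
                       seeing `outcome` (hypothetical model of the subject)
    outcomes        -> dict mapping each possible generator output to the
                       nominal probability Omega agrees to assign it
    """
    return sum(p for outcome, p in outcomes.items() if policy(outcome))

# Example: the subject one-boxes on "heads" and two-boxes on "tails",
# and the generator is treated as a fair 50/50 black box.
print(predicted_one_box_probability(
    policy=lambda outcome: outcome == "heads",
    outcomes={"heads": 0.5, "tails": 0.5},
))  # 0.5
```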
In good implementations we can’t cheat, in bad ones we can
Even better, in principle (though it would be computationally difficult), we could describe different simulations with different complexities and associated Occam priors, and with different probabilities of Omega making correct predictions. From this we could determine how much of a track record Omega needs before we consider one-boxing a good strategy. Though I suspect actually doing this would be harder than making Omega’s predictions.
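A toy version of that calculation, with loudly illustrative numbers of my own: suppose just two candidate models of Omega, a lucky coin-flipper (50% accurate) and a near-perfect predictor (99.9% accurate), with the predictor given a small Occam-style prior. Updating on a track record of n correct predictions in a row then tells you when one-boxing pulls ahead.

```python
def posterior_accuracy(n_correct, prior_predictor=1e-6,
                       acc_predictor=0.999, acc_flipper=0.5):
    """Posterior expected accuracy after n correct predictions in a row,
    for a two-model mixture (all numbers here are illustrative assumptions)."""
    w_pred = prior_predictor * acc_predictor ** n_correct
    w_flip = (1 - prior_predictor) * acc_flipper ** n_correct
    p_pred = w_pred / (w_pred + w_flip)
    return p_pred * acc_predictor + (1 - p_pred) * acc_flipper

def one_boxing_wins(acc):
    # One-boxing beats two-boxing when acc * $1,000,000 exceeds
    # (1 - acc) * $1,000,000 + $1,000, i.e. when acc > 0.5005.
    return acc * 1_000_000 > (1 - acc) * 1_000_000 + 1_000

n = 0
while not one_boxing_wins(posterior_accuracy(n)):
    n += 1
print(n)  # 10 correct predictions suffice under these particular assumptions
```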