It seems your probabilistic simulator Omega is amenable to rational analysis just like my case 2. In good implementations we can’t cheat, in bad ones we can; it all sounds quite normal and reassuring, no trace of a paradox. Just what I aimed for.
As for terminating, we need to demystify what is meant by “detecting a paradox”. Does it somehow compute the actual probabilities of me choosing one or two boxes? Then what part of the world is assumed to be “random” and what part is evaluated exactly? An answer to this question might clear things up.
One way Omega might prevent a paradox is by imposing an arbitrary time limit, say one hour, for you to choose whether to one-box or two-box. Omega could then run the simulation, however accurate, up to that limit of simulated time, or until you actually make a decision, whichever comes first. Exceeding the time limit could be treated as identical to two-boxing. A more sophisticated Omega, one that can find in constant time the point in the simulation at which you have made a decision, perhaps because the simulation state is described by a closed-form function with nice algebraic properties, could simply require that you eventually make a decision. This essentially puts the burden on the subject not to create a paradox, or anything that might be mistaken for a paradox, and not to take too long to decide.
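The time-limit scheme can be sketched in a few lines. Everything here is my own toy model, not anything Omega need actually use: `step` and `decision` stand in for whatever model of the subject Omega runs, and a timeout simply counts as two-boxing.

```python
# Hypothetical sketch of Omega's time-limited simulation loop.
# step() advances the simulated world; decision() inspects the
# simulated subject and reports a choice, or None if undecided.
def predict_choice(initial_state, step, decision, time_limit=3600):
    """Simulate forward until the subject decides or time runs out.

    Exceeding the time limit is treated as identical to two-boxing,
    so a subject who stalls or loops forever cannot create a paradox.
    """
    state, t = initial_state, 0
    while t < time_limit:
        choice = decision(state)   # "one-box", "two-box", or None
        if choice is not None:
            return choice
        state = step(state)        # advance simulated time one tick
        t += 1
    return "two-box"               # timeout counts as two-boxing
```

For example, a toy subject whose state is a counter and who decides at tick 10 gets predicted normally, while one who never decides is classified as a two-boxer by default.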
Then what part of the world is assumed to be “random” and what part is evaluated exactly?
Well, Omega could give you a pseudorandom number generator and agree to treat it as a probabilistic black box when making predictions. It might make sense to treat quantum decoherence as assigning probabilities to the different observable macroscopic outcomes, unless something like world mangling is true and Omega can predict deterministically which worlds get mangled. Less accurate Omegas could use probability to account for their own inaccuracy.
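To illustrate the black-box idea: instead of predicting the agreed-upon generator's outputs, Omega marginalizes over them. In this toy sketch (the names and setup are mine), the subject is modeled as a function from a tuple of coin flips to a choice, and Omega reports the probability of one-boxing by averaging over every possible flip sequence.

```python
import itertools

# Hypothetical sketch: Omega predicts everything deterministically
# except the agreed-upon PRNG, whose outputs it treats as fair coin
# flips. It reports a probability of one-boxing by enumerating all
# possible flip sequences rather than predicting any particular one.
def prob_one_box(subject, n_bits=3):
    """subject maps a tuple of 0/1 flips to 'one-box' or 'two-box'."""
    outcomes = list(itertools.product([0, 1], repeat=n_bits))
    hits = sum(subject(flips) == "one-box" for flips in outcomes)
    return hits / len(outcomes)
```

A subject who consults the generator and two-boxes only when all three flips come up heads would be assigned a 7/8 probability of one-boxing. Note that this only works because both parties agreed in advance on which part of the world counts as random.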
In good implementations we can’t cheat, in bad ones we can
Even better, in principle, though it would be computationally difficult, we could describe different simulations with different complexities and associated Occam priors, each with its own probability of Omega making correct predictions. From these we could determine how much of a track record Omega needs before we consider one-boxing a good strategy. Though I suspect actually doing this would be harder than making Omega’s predictions in the first place.
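A drastically simplified version of this calculation, with every number my own invention: suppose the only two hypotheses are that Omega is a near-perfect predictor (accuracy 0.999) or just guessing (accuracy 0.5), with a skeptical prior of one in a million on the predictor hypothesis, and the standard $1,000,000 / $1,000 payoffs. Each correct prediction observed multiplies the odds by 0.999/0.5, and we ask how long a track record makes one-boxing the higher expected-value strategy.

```python
# Toy two-hypothesis stand-in for a proper Occam-prior mixture over
# simulations: Omega is either a near-perfect predictor or a coin
# flipper, and we update on a streak of n correct predictions.
def required_track_record(prior=1e-6, acc_good=0.999, acc_chance=0.5,
                          big=1_000_000, small=1_000, max_n=1000):
    for n in range(max_n + 1):
        # Posterior odds on "real predictor" after n correct calls.
        odds = (prior / (1 - prior)) * (acc_good / acc_chance) ** n
        p_predictor = odds / (1 + odds)
        # Expected accuracy of Omega's prediction about us.
        p = p_predictor * acc_good + (1 - p_predictor) * acc_chance
        ev_one_box = p * big
        ev_two_box = small + (1 - p) * big
        if ev_one_box > ev_two_box:
            return n
    return None
```

With these toy numbers the answer comes out to about ten consecutive correct predictions, mostly because the payoff asymmetry means one-boxing wins as soon as Omega's expected accuracy creeps past 0.5005. A full treatment, summing over a whole family of simulations weighted by Occam priors, would be far harder, as the original point concedes.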