To screw with such an Omega, just ask a different friend who knows you equally well, take their judgement and do the reverse.
I believe this can be made consistent. Your first friend will predict that you will ask your second friend. Your second friend will predict that you will do the opposite of whatever they say, and so won’t be able to predict anything. If you ever do choose, you’ll have to fall back on some consistent procedure, which your first friend will consequently predict.
If you force your second friend (2F) to make some arbitrary prediction anyway, then if your first friend (1F) can predict 2F’s prediction, 1F will predict that you’ll do the opposite. If 1F can’t do that, he’ll do whatever he would do if you used a quantum randomizer (I believe this is usually said to be putting nothing in the box).
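The fixed-point structure of this argument can be made concrete with a toy model. This is only an illustrative sketch, not anything from the thread: here a "friend" is just a function that searches for a self-consistent guess about the agent's choice, and the agent is the contrarian strategy described above. All names (`second_friend`, `contrarian`, the box labels) are hypothetical.

```python
# Toy model of the two-predictor trick (illustrative names only).

def second_friend(agent):
    """2F tries to find a self-consistent prediction of the agent's choice."""
    for guess in ("one-box", "two-box"):
        if agent(guess) == guess:  # the prediction survives being acted on
            return guess
    return None  # no fixed point exists: 2F can't predict anything

def contrarian(guess):
    """The agent from the comment: do the opposite of whatever 2F says."""
    return "two-box" if guess == "one-box" else "one-box"

# Every guess refutes itself, so 2F has no consistent prediction.
print(second_friend(contrarian))  # None

# If 2F is forced to emit something arbitrary anyway, 1F can predict the
# agent by running the same computation one level up.
forced_guess = "one-box"  # arbitrary, as in the comment above
first_friend_prediction = contrarian(forced_guess)
actual_choice = contrarian(forced_guess)
print(first_friend_prediction == actual_choice)  # True: 1F wins
```

The point the sketch makes is that the contrarian strategy has no fixed point for 2F to find, but it is still a deterministic function of 2F's forced output, so a predictor one level up recovers it trivially.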
You have escalated the mystical power of Omega—surely it’s no longer just a human friend who knows you well—supporting my point about the quoted passage. If your new Omegas aren’t yet running full simulations (a case resolved by indexical uncertainty) but rather some kind of coarse-grained approximations, then I should have enough sub-pixel and off-scene freedom to condition my action on 2F’s response with neither 1F nor 2F knowing it. If you have some other mechanism of how Omega might work, please elaborate: I need to understand an Omega to screw with it.
To determine exactly how to screw with your Omega, I need to understand what it does. If it’s running something less than a full simulation, something coarse-grained, I can exploit it: condition on a sub-pixel or off-scene detail. (The full simulation scenario is solved by indexical uncertainty.) In the epic thread no one has yet produced a demystified Omega that can’t be screwed with. Taboo “predict” and explain.
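The coarse-graining exploit can also be sketched as a toy, under an assumption the thread doesn't spell out: that "coarse-grained" means Omega simulates the agent on a reduced-precision copy of the world state. All names here are hypothetical; the agent conditions its choice on exactly the "sub-pixel" detail the coarse model discards.

```python
# Toy sketch of exploiting a coarse-grained Omega (hypothetical model).

def coarse(state, grain=10):
    """Omega's world model: keeps only the coarse part of the state."""
    return state - (state % grain)

def agent(state):
    """Condition the choice on the fine detail Omega can't represent."""
    return "one-box" if state % 10 < 5 else "two-box"

def omega_predict(state):
    """Omega simulates the agent, but only on its coarsened world model."""
    return agent(coarse(state))

# Omega's coarse copy always sees fine detail 0, so it always predicts
# "one-box", while the real agent disagrees on half of all world states.
mismatches = sum(omega_predict(s) != agent(s) for s in range(100))
print(mismatches)  # 50
```

This is the "condition on a sub-pixel detail" move in miniature: any predictor that is a function of a lossy compression of the world can be made wrong by an agent whose choice depends on the lost bits.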