This passage is instructively wrong. To screw with such an Omega, just ask a different friend who knows you equally well, take their judgement and do the reverse.
I think this reply is also illuminating: the stated goal in Newcomb’s problem is to maximize your financial return. If your goal is to make Omega have predicted wrongly, you are solving a different problem.
I do agree that the problem may be subtly self-contradictory. Could you point me to your preferred writeup of the Unexpected Hanging Paradox?
Uh, Omega has no business deciding what problem I’m solving.
The solution I consider definitively correct is outlined on the Wikipedia page, but it is simple enough to state here. The judge is actually saying, “you can’t deduce the day you’ll be hanged, even if you use this statement as an axiom too”. This phrase is self-referential, like the phrase “this statement is false”. Although not all self-referential statements are self-contradictory, this one turns out to be: the proof of self-contradiction simply follows the prisoner’s reasoning. This line of attack seems to have been first rigorously formalized by Fitch in “A Goedelized Formulation of the Prediction Paradox”; I can’t find the full text online. And that’s all there is to it.
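To make the prisoner’s reasoning explicit, here is a rough sketch of the backward induction; the notation is my own paraphrase, not Fitch’s formalization.

```latex
% Sketch of the prisoner's backward induction (my own paraphrase, not Fitch's).
% Let d \in \{1,\dots,5\} be the hanging day, and let S be the judge's statement:
% "the hanging occurs on some day d, and on the morning of day d the prisoner
%  cannot deduce d from S plus his observations so far."
\[
\begin{aligned}
&\text{Alive on the morning of day 5: } S \text{ forces } d = 5,
  \text{ so the prisoner deduces } d, \text{ contradicting } S.\\
&\quad\text{Hence } S \Rightarrow d \neq 5.\\
&\text{Given } d \neq 5, \text{ alive on the morning of day 4: } S \text{ forces } d = 4,
  \text{ contradicting } S \text{ again. Hence } S \Rightarrow d \neq 4.\\
&\text{Iterating down to day 1: } S \Rightarrow d \notin \{1,\dots,5\},
  \text{ contradicting } S\text{'s own claim that the hanging occurs.}\\
&\quad\text{So } S \vdash \neg S\text{: the judge's statement, taken as an axiom, refutes itself.}
\end{aligned}
\]
```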
No, but if you’re solving something other than Newcomb’s problem, why discuss it on this post?
I’m not solving it in the sense of utility maximization. I’m solving it in the sense of demonstrating that the input conditions might well be self-contradictory, using any means available.
Okay yes, I see what you’re trying to do and the comment is retracted.
Maximising your financial return entails making Omega’s prediction wrong: if you can get it to predict that you’ll one-box when you actually two-box, you maximise your financial return.
Well, it had better not be predictable that you’re going to try that. I mean, at the point where Omega realizes, “Hey, this guy is going to try an elaborate clever strategy to get me to fill box B and then two-box,” it’s pretty much got you pegged.
That’s not so—the “elaborate clever strategy” does include a chance that you’ll one-box. What does the payoff matrix look like from Omega’s side?
I never said it was an easy thing to do. I just meant that that situation is the maximum if it is reachable, which depends upon the implementation of Omega in the real world.
My point is merely that getting Omega to predict wrong is easy (flip a coin). Getting an expectation value higher than $1 million is what’s hard (and likely impossible, if Omega is much smarter than you, as Eliezer says above).
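For concreteness, here is a small sketch of that arithmetic, assuming the usual payoffs ($1,000 always in box A, $1,000,000 in box B iff Omega predicted one-boxing) and, purely as an illustrative assumption, an Omega that predicts deterministic strategies perfectly but can only guess at a fair coin.

```python
# Expected payoffs in Newcomb's problem under the standard payoff structure:
# box A always holds $1,000; box B holds $1,000,000 iff Omega predicted one-boxing.
# The Omega model is an illustrative assumption: it predicts deterministic
# strategies perfectly, and against a fair coin it can do no better than guessing
# (modelled here as filling box B with probability 1/2).

A = 1_000
B = 1_000_000

def payoff(choice, prediction):
    """Player's payoff given their choice and Omega's prediction."""
    box_b = B if prediction == "one-box" else 0
    return box_b if choice == "one-box" else box_b + A

# Deterministic strategies against a perfect predictor.
always_one_box = payoff("one-box", "one-box")   # 1,000,000
always_two_box = payoff("two-box", "two-box")   # 1,000

# Coin-flip strategy: choice and Omega's guess are independent fair coins.
coin_flip = 0.25 * (payoff("one-box", "one-box") + payoff("one-box", "two-box")
                    + payoff("two-box", "one-box") + payoff("two-box", "two-box"))

print(always_one_box, always_two_box, coin_flip)  # 1000000 1000 500500.0
```

Under those assumptions the coin-flipper makes Omega wrong half the time, yet expects only about $500,500, well short of the $1,000,000 a predictable one-boxer walks away with.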