My solution to the problem of the two boxes:
Flip a coin. If heads, both A & B. If tails, only A. (If the superintelligence can predict a coin flip, make it a radioactive decay or something. Eat quantum, Hal.)
In all seriousness, this is a very odd problem (I love it!). Of course two boxes is the rational solution—it’s not as if post-facto cogitation is going to change anything. But the problem statement seems to imply that it is actually impossible for me to choose the choice I don’t choose, i.e., choice is actually impossible.
Something is absurd here. I suspect it’s the idea that my choice is totally predictable. There can be a random element to my choice if I so choose, which kills Omega’s plan.
What wedrifid said. See also Rationality is Systematized Winning and the section of What Do We Mean By “Rationality”? about “Instrumental Rationality”, which is generally what we mean here when we talk about actions being rational or irrational. If you want to get more money, then the instrumentally rational action is given by the epistemically rational answer to the question “What course of action will cause me to get the most money?”.
If you accept the premises of Omega thought experiments, then the right answer is one-boxing, period. If you don’t accept the premises, it doesn’t make sense for you to be answering it one way or the other.
I thought about this last night and also came to the conclusion that randomizing my choice would not “assume the worst” as I ought to.
And I fully accept that this is just a thought experiment & physics is a cheap way out. I will now take the premises or leave them. :)
It is a common assumption in these sorts of problems that if Omega predicts that you will condition your choice on a quantum event, it will not put the money in Box B.
See The Least Convenient Possible World.
No, it isn’t. If you like money, it is rational to get more money. Take one box.
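To make the money argument concrete, here is a rough expected-value sketch in Python. The payoff amounts ($1,000,000 in the predicted “big” box, $1,000 in the always-filled box), the 99% predictor accuracy, and the treatment of the coin-flip strategy (the big box is left empty whenever the predictor foresees a randomized choice, per the comment above) are my own illustrative assumptions; the thread itself doesn’t fix these numbers.

```python
# Rough expected-value sketch for Newcomb's problem.
# Assumed payoffs: $1,000,000 in the predicted "big" box, $1,000 in the
# always-filled box. These figures are not stated in the thread.

def expected_value(strategy, accuracy=0.99, big=1_000_000, small=1_000):
    """Expected payoff for a strategy against a predictor of given accuracy."""
    if strategy == "one-box":
        # The big box is filled iff the predictor (correctly) foresaw one-boxing.
        return accuracy * big
    if strategy == "two-box":
        # The big box is filled only if the predictor (wrongly) foresaw one-boxing.
        return small + (1 - accuracy) * big
    if strategy == "coin-flip":
        # Per the comment above: if the predictor foresees a randomized choice,
        # it leaves the big box empty. Heads = both boxes (small only);
        # tails = big box only (empty).
        return 0.5 * small
    raise ValueError(strategy)

for s in ("one-box", "two-box", "coin-flip"):
    print(f"{s:>9}: ${expected_value(s):,.0f}")
# With a 99%-accurate predictor this prints roughly:
#   one-box: $990,000
#   two-box: $11,000
# coin-flip: $500
```

Under these assumptions one-boxing beats two-boxing for any predictor accuracy above roughly 50.05%, and the coin-flip strategy does worst of all, which matches the conclusion above that randomizing doesn’t help.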
“I suspect it’s the idea that my choice is totally predictable”

At face value, that does sound absurd. The problem is that you are underestimating a superintelligence. Imagine that the universe is a computer simulation, so that a set of physical laws plus a very, very long string of random numbers is a complete causal model of reality. The superintelligence knows the laws and all of the random numbers. You still make a choice, even though that choice ultimately depends on everything that preceded it. See http://wiki.lesswrong.com/wiki/Free_will
I think much of the debate about Newcomb’s Problem is about the definition of superintelligence.
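To make the simulation picture above a bit more concrete, here is a minimal, purely illustrative sketch; the function names and the toy decision rule are hypothetical, not anything from the thread or the linked page. The idea is only that a predictor which knows the agent’s decision procedure and its random inputs can re-run that procedure and get the same answer, even though the agent still genuinely computes its choice.

```python
import random

# A minimal sketch of the "laws plus random string" picture: the agent's
# decision procedure is an ordinary function of its inputs, and the predictor
# "knows all the random numbers" simply by re-running that function with the
# same seed. The agent still computes (i.e., makes) its choice; the choice is
# nonetheless fully predictable.

def agent_decides(world_state, rng):
    """The agent's decision procedure: deterministic given its inputs."""
    if world_state["feeling_contrarian"] and rng.random() < 0.5:
        return "two-box"
    return "one-box"

def predictor(world_state, seed):
    # The predictor has the complete causal model: the same laws (the function
    # itself) and the same random string (the seed), so it reaches the same answer.
    return agent_decides(world_state, random.Random(seed))

world = {"feeling_contrarian": True}
seed = 42

prediction = predictor(world, seed)                        # Omega fills the boxes based on this
actual_choice = agent_decides(world, random.Random(seed))  # the agent then "chooses"

print(prediction, actual_choice, prediction == actual_choice)  # the two always match
```

The point of the sketch is only that “predictable” and “chosen” are not in tension once the predictor has the complete causal model: the prediction and the choice are the same computation run twice.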