I’m a convinced two-boxer, but I’ll try to put my argument without any bias. It seems to me that the way this problem has been framed is an attempt to rig it for the one-boxers. Talk of “precommitment” suggests the subject has advance knowledge of Omega and of what is to happen. The way I thought the paradox worked, Omega would scan/analyze a person and make its prediction before that person ever heard of the dilemma. A person therefore has no way to develop an intention of being a one-boxer or a two-boxer that affects Omega’s prediction in any way. In the Irene/Rachel situation, there is no way to ever “precommit”: the subject never gets to play Omega’s game again, and Omega scans their brain before they ever heard of him. (So imagine you get only one shot at Omega’s game, and Omega made its prediction before you ever came to this website, or anywhere else, and heard about Newcomb’s paradox. That prediction already decides what goes in the boxes.)
Secondly, I think a requirement of the problem is that your choice, at the time of actually taking the box(es), cannot affect what’s in the box. Otherwise we have two completely different problems: if Omega, or information about your choice, can in any way travel back in time and change the contents of the box, the choice is trivial. So yes, Omega may have chosen to discriminate against rational people and reward irrational ones; the point is, there is absolutely nothing we can do about it, neither through precommitment nor at the actual moment of choice.
To clarify why I think two-boxing is the right choice, I propose a real-life experiment. Say we develop a survey which, by asking people various questions about logic, the paranormal, and so on, classifies them as one-boxers or two-boxers. The crux of the setup is that none of our volunteers has ever heard of Newcomb’s paradox; we make up any pretext we like for the survey. THEN, having already placed money or no money in box B, we tell them the Omega story and let them make the choice. Hypothetically our survey could be 100% accurate; even if not, it may be accurate enough that many of our predicted one-boxers will be glad to find their choice rewarded. In essence, they cannot “precommit,” and their choice won’t magically change the contents of a box that was filled based on a human survey. Nor can they go back in time and cheat on our survey; that is exactly how Omega is supposed to operate. The point is that, from the experimental point of view, every single person would have made more by taking both boxes, because at the time of choice there is always the extra $1000 in box A.
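The dominance reasoning behind this experiment is simple enough to check in a few lines of Python (assuming the usual stakes of $1,000 in box A and $1,000,000 in box B):

```python
# Once the contents of box B are fixed, compare the two choices.
# Box A always contains $1,000; box B contains either $0 or $1,000,000.
BOX_A = 1_000

for box_b in (0, 1_000_000):
    one_box = box_b          # take only box B
    two_box = BOX_A + box_b  # take both boxes
    # Whatever was placed in box B, two-boxing pays exactly $1,000 more.
    assert two_box - one_box == BOX_A
    print(f"box B = {box_b:>9,}: one-box {one_box:>9,}, two-box {two_box:>9,}")
```

This is the causal decision theorist’s whole case: holding the contents fixed, two-boxing dominates in every row of the table.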
The key point you’ve missed in your analysis, however, is that Omega is almost always correct in his predictions.
It doesn’t matter how Omega does it—that is a separate problem. You don’t have enough information about his process of prediction to make any rational judgment about it except for the fact that it is a very, very good process. Brain scans, reversed causality, time travel, none of those ideas matter. In the paradox as originally posed, all you have are guesses about how he may have done it, and you would be an utter fool to give higher weight to those guesses than to the fact that Omega is always right.
If observations (that Omega is always right) disagree with theory (that Omega cannot possibly be right), it is the theory that is wrong, every time.
Thus the rational agent should, in this situation, give extremely low weight to his understanding of the way the universe works, since it is obviously flawed (the existence of a perfect predictor proves this). The question really comes down to 100% chance of getting $1000 plus a nearly 0% chance of getting $1.01 million, vs nearly 100% chance of getting $1 million.
What really blows my mind about the two-box choice is this: you can degrade Omega’s predictive ability significantly, and unless you are absolutely desperate for that $1000*, two-boxing still doesn’t become superior until Omega is barely better than chance (the expected values equalize at 50.05% accuracy). Only below that do you expect to get more money, on average, by taking both boxes.
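The break-even point is a two-line expected-value calculation; here is a sketch in Python, again assuming the usual $1,000 / $1,000,000 stakes, with p standing for the probability that Omega’s prediction matches the choice actually made:

```python
# Expected value of each choice as a function of p, the probability that
# Omega's prediction matches the choice actually made.
A, B = 1_000, 1_000_000

def ev_one_box(p):
    # Box B is full only when Omega correctly predicted one-boxing.
    return p * B

def ev_two_box(p):
    # Box A is guaranteed; box B is full only when Omega guessed wrong.
    return A + (1 - p) * B

# Setting p*B = A + (1-p)*B and solving for p gives the break-even accuracy:
break_even = (A + B) / (2 * B)
print(break_even)  # 0.5005

for p in (0.5, break_even, 0.9, 0.99):
    print(f"p = {p}: one-box EV {ev_one_box(p):,.0f}, two-box EV {ev_two_box(p):,.0f}")
```

At p = 0.5 (a coin flip) two-boxing is ahead by exactly the $1,000 in box A; by p = 0.9 one-boxing is ahead by hundreds of thousands of dollars in expectation.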
In other words, if you think Omega is doing anything better than flipping a coin to determine the contents of box B, you are better off taking only box B.
*I could see the value of $1000 rising significantly if, for example, a man is holding a gun to your head and will kill you in two minutes unless you give him $1000. In that case, any uncertainty about Omega’s abilities is overshadowed by the certainty of the $1000. This inverts if the man with the gun demands more than $1000, which makes the two-box choice a non-option.