I agree that it’s clear that you should one-box; I’m more talking about justifying why one-boxing is in fact correct when it can’t logically influence whether there is money in the box. I found this unnerving at first, but maybe I was the only one.
The correct solution is not to one-box. It is to decide based on the flip of a coin. Take that, Omega.
Seriously, the problem is over-constrained to the point of being meaningless; it doesn’t represent reality at all. Part of the problem that leads to intuition breakdown is that the setup deals with omniscient knowledge and infinite computation, which, surprise surprise, has weird results. “Infinities in math problems lead to paradox: News at 11.”
The setup of the problem assumes that Omega has full knowledge of your decision-making process and that you have no reciprocal insight into its own, other than the assumption that its simulation of you is correct. Well, of course the correct answer then is to one-box, if you insist on deterministic processes, because by definition two-boxing results in empty boxes. This only feels weird because it seems acausal. But the solution is, as Dagon said, equivalent to eliminating free will: without the intuitive assumption of free will, the outcome is predictable and boring. Imagine the “you” in the setup were replaced with a very boring robotic arm with no intelligence, following a strict program to pick up either one or both of the boxes, explicitly configured to do one or the other. Omega walks up, checks the source code to see whether it is configured to one-box or two-box, and fills the boxes accordingly.
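To make the robotic-arm version concrete, here is a minimal sketch in Python. The class and function names are made up for illustration, and the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque one) are assumed rather than taken from anything stated above.

```python
# Minimal sketch of the robotic-arm version of the setup.
# Assumed payoffs: $1,000 in the transparent box; $1,000,000 in the
# opaque box if and only if Omega expects one-boxing.

class RobotArm:
    def __init__(self, strategy):
        # The strategy is fixed in the "source code": "one-box" or "two-box".
        self.strategy = strategy

    def choose(self):
        return self.strategy


def omega_fill_boxes(robot):
    """Omega reads the robot's configuration and fills the boxes accordingly."""
    transparent = 1_000
    opaque = 1_000_000 if robot.strategy == "one-box" else 0
    return transparent, opaque


def payout(robot):
    transparent, opaque = omega_fill_boxes(robot)
    if robot.choose() == "one-box":
        return opaque
    return transparent + opaque


print(payout(RobotArm("one-box")))   # 1000000
print(payout(RobotArm("two-box")))   # 1000 -- the opaque box was left empty
```

With the decision fixed in the program, nothing acausal is going on: the opaque box is empty for the two-boxing robot simply because Omega read the configuration before filling it.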
The weirdness comes when we replace the robot with “you”, except a version of you that is artificially constrained to be deterministic, and for which the physically implausible assumption is made that Omega can simulate you accurately. It’s a problem of bad definitions, the sort of thing A Human’s Guide to Words warns us against. Taboo the “you” in the problem statement and you find something that, for the purposes of the setup, resembles the robotic arm more than an actual person.
However, if you change the setup to two Omegas of finite capability, where “you” have full access to Omega’s decision-making facilities and vice versa, then the problem no longer has a solution independent of the peculiarities of the participants involved. It becomes an adversarial situation where the winner is whoever out-smarts the other, or, if the two are equally matched, it reduces to chance. Unless you think you are outclassed by your opponent, two-boxing has a chance here. Indeed, the coin-flip decision rule I snarkily gave above has 5x the expected reward of one-boxing in the usual setup, and is probably a Schelling point for equally matched opponents.
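For what it’s worth, how the coin-flip numbers come out depends heavily on something the usual problem statement leaves open: what Omega does when it faces a randomizer it cannot simulate. Here is a rough sketch of the expected-value calculation under one simple assumed model, where Omega fills the opaque box with some probability `p_fill` independent of the actual flip; both `p_fill` and the payoff amounts are assumptions for illustration, not part of the setup above.

```python
# Expected value of the coin-flip rule under an assumed model of Omega:
# it fills the opaque box with probability p_fill, independent of the flip.
# Payoff amounts are the standard Newcomb values, assumed for illustration.

def coin_flip_ev(p_fill, transparent=1_000, opaque=1_000_000):
    ev_if_one_box = p_fill * opaque                # take only the opaque box
    ev_if_two_box = transparent + p_fill * opaque  # take both boxes
    return 0.5 * ev_if_one_box + 0.5 * ev_if_two_box


for p in (0.0, 0.5, 1.0):
    print(p, coin_flip_ev(p))
```

Different assumptions about how Omega treats a randomizer give very different numbers, which is part of why the payoff of this strategy is so sensitive to how the problem is specified.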
Actual instances of Newcomb’s problem in the real world resemble the latter, not the former. Hence the breakdown in intuition: a lot of people insist on two-boxing because in real-world problems that IS the best solution. Others insist on one-boxing because they are better able to read the instructions accurately and/or take the setup of the problem at face value, even though it is very far removed from reality.
As Dagon said, the assumption that your choice is independent of the state of the universe is flawed. Any real instance of Newcomb’s problem involves some uncertainty, in both your mind and your opponent’s, about the state of the other, and becomes an adversarial problem for which there is no situationally independent solution.
Part of the problem that leads to intuition breakdown is that the setup deals with omniscient knowledge and infinite computation
...which makes a part of your brain scream: “As far as I know, this is not possible. Someone is trying to scam you. You should grab both boxes and run!”
I call that the pragmatic intuition. It is the heuristic of doubting or distrusting anything not grounded in physical reality. Some people, particularly mathematicians, lack this intuition. Others, particularly seasoned engineers, have it in spades. I think it is a useful heuristic to have, particularly if you want your beliefs to reflect the real world.
when it can’t logically influence whether there is money in the box.
That’s pretty much the heart of the issue, isn’t it? Clearly, given the omniscience of Omega’s prediction, your choice is extremely correlated with what’s in the box. So whether your choice determines the box contents, the box contents determine your choice, or some other thing determines both, there is a “logical influence” between your choice and the money in the box.
The assumption that your choice is independent of the state of the universe is flawed.