Reposting this from last week’s open thread because it seemed to get buried
Is Newcomb’s Paradox solved? I don’t mean from a decision standpoint, but the logical knot of “it is clearly, obviously better to one-box, and it is clearly, logically proven better to two-box”. I think I have a satisfying solution, but it might be old news.
It’s solved for anyone who doesn’t believe in magical “free will”. If it’s possible for Omega to correctly predict your action, then it’s only sane to one-box. Only decision systems that deny this ability to predict will two-box.
Causal Decision Theory, because it assumes single-direction-causality (a later event can’t cause an earlier one), can be said to deny this prediction. But even that’s easily solved by assuming an earlier common cause (the state of the universe that causes Omega’s prediction also causes your choice), as long as you don’t demand actual free will.
I agree that it’s clear that you should one-box – I’m more talking about justifying why one-boxing is in fact correct when it can’t logically influence whether there is money in the box. I initially found this unnerving, but maybe I was the only one.
The correct solution is not to one-box. It is to decide based on the flip of a coin. Take that, Omega.
Seriously, the problem is over-constrained to the point of being meaningless, not representing reality at all. Part of the problem that leads to intuition breakdown is that the setup deals with omniscient knowledge and infinite computation, which, surprise surprise, has weird results. “Infinities in math problems lead to paradox: news at 11.”
The setup of the problem assumes that Omega has full knowledge of your decision-making process and that you have no reciprocal insight into its own, other than the assumption that its simulation of you is correct. Well, of course the correct answer then is to one-box, if you insist on deterministic processes, because by definition two-boxing leaves the opaque box empty. This only feels weird because it seems acausal. But the solution is, as Dagon said, equivalent to eliminating free will: without the intuitive assumption of free will the outcome is predictable and boring. Imagine the “you” in the setup were replaced with a very boring robotic arm with no intelligence, running a strict program that picks up either one box or both, but explicitly configured to do one or the other. Omega walks up, checks the source code to see whether the arm is configured to one-box or two-box, and fills the boxes accordingly.
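To make the robotic-arm version concrete, here is a minimal sketch (the dollar amounts and the idea that Omega literally reads a configuration string are illustrative assumptions on my part, not part of the canonical problem statement):

```python
# Minimal sketch of the robotic-arm version: the "agent" is a fixed
# program, and Omega simply reads its configuration before filling the boxes.
# Amounts and the inspection step are illustrative assumptions.

def omega_fills_boxes(robot_config: str) -> tuple[int, int]:
    """Omega inspects the arm's 'source code' and fills the boxes."""
    box_a = 1_000                                           # always present
    box_b = 1_000_000 if robot_config == "one-box" else 0   # only if one-boxing predicted
    return box_a, box_b

def robot_payoff(robot_config: str) -> int:
    """The arm then executes its fixed program and collects its payoff."""
    box_a, box_b = omega_fills_boxes(robot_config)
    return box_b if robot_config == "one-box" else box_a + box_b

print(robot_payoff("one-box"))   # 1000000
print(robot_payoff("two-box"))   # 1000
```

Once the “agent” is just a fixed program, the question of which configuration earns more has exactly the boring, predictable answer described above.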
The weirdness comes when we replace the robot with “you”, except a version of you that is artificially constrained to be deterministic and for which the physically implausible assumption is made that Omega can accurately simulate you. It’s a problem of bad definitions, the sort of thing A Human’s Guide to Words warns us against. Taboo the “you” in the setup and, for the purposes of the problem, you find something that resembles the robotic arm more than an actual person.
However, if you change the setup to two Omegas of finite capability, where “you” have full access to the decision-making facilities of Omega and vice versa, then the problem no longer has a solution independent of the peculiarities of the participants involved. It becomes an adversarial situation where the winner is whoever outsmarts the other, or, if they are equally matched, it reduces to chance. Unless you think you are outclassed by your opponent, two-boxing has a chance here. Indeed, the coin-flip decision criterion I snarkily gave above has 5x the expected reward of one-boxing in the usual setup, and is probably a Schelling point for equally matched opponents.
Actual instances of Newcomb’s problem in the real world resemble the latter, not the former. Hence the breakdown in intuition: a lot of people insist on two-boxing because in real-world problems that IS the best solution. Others insist on one-boxing because they are better able to read the instructions accurately and/or take the setup of the problem at face value, even though it is very far removed from reality.
As Dagon said, the assumption that your choice is independent of the state of the universe is flawed. Any real instance of Newcomb’s problem has some uncertainty in both your and the opponent’s mind about the state of the other, and becomes an adversarial problem for which there is no situationally independent solution.
Part of the problem that leads to intuition breakdown is that the setup deals with omniscient knowledge and infinite computation
...which makes a part of your brain scream: “As far as I know, this is not possible. Someone is trying to scam you. You should grab both boxes and run!”
I call that the pragmatic intuition. It is the heuristic of doubting or distrusting anything not grounded in physical reality. Some people, particularly mathematicians, lack this intuition. Others, particularly seasoned engineers, have it in spades. I think it is a useful heuristic to have, particularly if you want your beliefs to reflect the real world.
when it can’t logically influence whether there is money in the box.
That’s pretty much the heart of the issue, isn’t it? Clearly, given the omniscience of Omega’s prediction, your choice is extremely correlated with what’s in the box. So whether your choice determines the box contents, the box contents determine your choice, or some other thing determines both, there is a “logical influence” between your choice and the money in the box.
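To put a number on how much that correlation matters, here is a quick expected-value sketch. It uses the standard $1,000 / $1,000,000 amounts; treating Omega’s accuracy as a free parameter p is my own framing, not part of the problem statement:

```python
# Expected payoff of each strategy as a function of the predictor's accuracy p.
# The accuracy values below are illustrative assumptions.

def ev_one_box(p: float) -> float:
    # With probability p, Omega correctly predicted one-boxing and filled box B.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # You always get box A's $1,000; box B is full only if Omega got you wrong.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box={ev_one_box(p):,.0f}  two-box={ev_two_box(p):,.0f}")
```

At p = 0.5 (no predictive power) two-boxing comes out ahead; anywhere above roughly p = 0.5005 the correlation is already strong enough that one-boxing wins, and at p = 1 it is not close.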
The assumption that your choice is independent of the state of the universe is flawed.
If you assume away free will, the problem loses meaning. What is that “choice” between one and two boxes that you are supposed to make? You don’t make any choices.
Indeed. You experience the results of the progression of states of the universe. It feels like you’re making a choice, but that’s illusory. Not so much “assume away” free will as “dissolve the concept” and recognize that it’s meaningless.
Or at least that’s the case in a universe where Omega can perfectly (or even near-perfectly) predict your “choices”; choice is meaningless if it’s that predictable. It’s not actually proven that this is possible, or that our universe (including consciousness) works that way.
How else would it work? Where is the decision going to come from that Omega can’t see?

I don’t know how else it would work. But I also don’t know how it could work in the first place, so that doesn’t tell us much. Omega doesn’t (as far as I know) actually exist, so “it doesn’t work at all” is a justifiable answer as well.
There’s only so much you can learn about the actual universe from thought experiments.
Thanks for starting the discussion, but please ALSO post your solution. Pretty much everything on the topic is old news, so no harm even if yours is already known to some.
I don’t see an issue that matters besides the decision standpoint (and that one could be solved). Where exactly you see the issue likely depends on the assumptions you make about the problem.
What we have is a list of proposed decision theories (Evidential Decision Theory, Causal Decision Theory, Timeless Decision Theory, Updateless Decision Theory), each of which acts the same on standard decisions, but which deal with Newcomb-like problems differently. Some of these decision theories satisfy nice general properties which we would want a decision theory to satisfy. There’s argument about which decision theory is correct, but also about what the various decision theories actually do in various situations. For example, CDT is normally thought of as the two-boxing theory that people intuitively use, but some people argue that it should take into account the possibility that it is in Omega’s simulation, and hence even people following CDT should actually one-box.
So the discussion is more nuanced than “What is the correct thing to do in Newcomb’s problem?”; it’s more “By what general criteria should we judge a decision theory?”. Of course, any particular insight you have about Newcomb’s problem might generalise to this way of looking at things.
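For anyone who wants the CDT/EDT contrast spelled out, here is a rough sketch of how those two theories score Newcomb’s problem. The 0.99 accuracy and the 50/50 prior over the opaque box’s contents are illustrative assumptions, and the real theories are defined much more carefully than this; the sketch is only meant to show where the disagreement comes from:

```python
# Rough sketch of how CDT and EDT each score one-boxing vs two-boxing.
# Payoffs are the standard $1,000 / $1,000,000; the predictor accuracy
# and the prior over box B's contents are illustrative assumptions.

ACCURACY = 0.99          # assumed reliability of Omega's prediction
PRIOR_BOX_B_FULL = 0.5   # CDT's credence that box B is full, fixed before acting

def payoff(action: str, box_b_full: bool) -> int:
    box_a = 1_000
    box_b = 1_000_000 if box_b_full else 0
    return box_b if action == "one-box" else box_a + box_b

def cdt_value(action: str) -> float:
    # CDT: the contents are already causally fixed, so use the same
    # probability for both actions; two-boxing then dominates.
    return (PRIOR_BOX_B_FULL * payoff(action, True)
            + (1 - PRIOR_BOX_B_FULL) * payoff(action, False))

def edt_value(action: str) -> float:
    # EDT: condition on the action; choosing to one-box is strong evidence
    # that Omega predicted one-boxing and filled box B.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * payoff(action, True) + (1 - p_full) * payoff(action, False)

for name, value in (("CDT", cdt_value), ("EDT", edt_value)):
    scores = {a: value(a) for a in ("one-box", "two-box")}
    print(name, scores, "->", max(scores, key=scores.get))
```

CDT recommends two-boxing (it is an extra $1,000 for any fixed state of the boxes), while EDT recommends one-boxing (conditioning on the action swings the expectation). TDT and UDT also end up one-boxing, by routes this sketch doesn’t capture.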