The strategy I have been considering, in my attempt to prove a paradox inconsistent, is to derive a contradiction from the problem formulation.
This seems like a worthy approach to paradoxes! I’m going to suggest broadening your search slightly: specifically, include the claim “and this is paradoxical” among the things that can be rejected as producing contradictions, because in this case there just isn’t a paradox. You take the one box and get rich; if there is a decision theory that says to take both boxes, you go and get a better decision theory. For this reason “Newcomb’s Paradox” is a misnomer, and I would only accept “Newcomb’s Problem” as a name.
In Newcomb’s problem, suppose each player uses a fair coin flip to decide whether to one-box or two-box. Then Omega could not have a sustained correct prediction rate above 50%. But the problem formulation says Omega does; therefore the problem must be inconsistent.
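To make the 50% ceiling concrete, here is a minimal simulation sketch; the particular `omega_predict` rule is an illustrative assumption of mine, since the point is only that no prediction rule based on observable information beats a fair coin.

```python
import random

def omega_predict(history):
    # Any prediction rule based on past observations will do; as an
    # illustrative assumption, Omega predicts the player's most common
    # past choice.
    if not history:
        return "one-box"
    return max(set(history), key=history.count)

def coin_flip_player():
    # The player ignores everything and decides by a fair coin flip.
    return random.choice(["one-box", "two-box"])

history, correct, trials = [], 0, 100_000
for _ in range(trials):
    prediction = omega_predict(history)
    choice = coin_flip_player()
    correct += (prediction == choice)
    history.append(choice)

print(f"Omega's accuracy: {correct / trials:.3f}")  # hovers around 0.5
```

Because the coin’s outcome is independent of everything Omega can condition on, the accuracy converges to 1/2 no matter what rule is substituted for `omega_predict`.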
Yes, if the player is allowed access to entropy that Omega cannot have then it would be absurd to also declare that Omega can predict perfectly. If the coin flip is replaced with a quantum coin flip the problem becomes even worse, because it leaves an Omega that can perfectly predict what will happen yet faces the plainly inconsistent task of making contradictory things happen. The problem specification needs to include a clause for how ‘randomization’ is handled.
Alternatively, suppose Omega knew the outcome of the coin flip in advance; let’s say Omega has access to all relevant information, including any supposed randomness used by the decision-maker. Then we can consider the decision to have already been made; the idea of a choice occurring after Omega has left is illusory (i.e. the choice is deterministic: anyone with enough information could have predicted it).
Here is where I should be able to link you to the wiki page on free will, where you would be given an explanation of why the notion that determinism is incompatible with choice is a confusion. Alas, that page still has pretentious “Find Out For Yourself” tripe on it instead of useful content. The Wikipedia page on compatibilism is somewhat useful, but not particularly tailored to a reductionist decision-theory focus.
In the case of the all-knowing Omega, talking about what someone should choose after Omega has left seems mistaken. The agent is no longer free to make an arbitrary decision at run-time, since that would have backwards causal implications; instead we can, without restricting which algorithm is chosen, require the decision-making algorithm to be written down and provided to Omega before the whole simulation starts. Since Omega can predict the agent’s decision, the agent’s decision does determine what’s in the box, despite the usual claim that there is no causal link. Taking that into account, CDT doesn’t fail after all.
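As a toy illustration of that setup (my own sketch; the assumption is simply that “providing the algorithm to Omega” means Omega can run it before filling the boxes, and that no run-time randomness is available):

```python
def run_newcomb(agent):
    # Omega "predicts" by running the agent's algorithm before filling the box.
    prediction = agent()
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    # The "decision" happens after Omega has left, but it is the same
    # deterministic algorithm, so it necessarily matches the prediction.
    choice = agent()
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(run_newcomb(lambda: "one-box"))   # 1000000
print(run_newcomb(lambda: "two-box"))   # 1000
```

With the algorithm fixed in advance, the payoff is determined as soon as the algorithm is chosen, which is the sense in which the box contents are downstream of that choice.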
There have been attempts to create derivatives of CDT that work like that, replacing the “C” of conventional CDT with a type of causality that can run backwards in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.
I would love to hear from someone in further detail on these issues of consistency. Have they been addressed elsewhere? If so, where?
I’m not sure which further details you are after. Are you after a description of Newcomb’s problem that includes the details necessary to make it consistent? Or about other potential inconsistencies? Or other debates about whether the problems are inconsistent?
Thanks for the response! I’m looking for a formal version of the viewpoint you reiterated at the beginning of your most recent comment:
Yes, if the player is allowed access to entropy that Omega cannot have then it would be absurd to also declare that Omega can predict perfectly. [...] The problem specification needs to include a clause for how ‘randomization’ is handled.
That makes a lot of sense, but I haven’t been able to find it stated formally. Wolpert and Benford’s papers (using game-theoretic decision trees or, alternatively, plain probability theory) seem to formally show that the problem formulation is ambiguous, but they are recent papers, and I haven’t been able to tell how well they stand up to outside analysis.
If there is a consensus that sufficient use of randomness prevents Omega from making perfect or nearly perfect predictions, then why is Newcomb’s problem still relevant? And if there’s no randomness, wouldn’t an appropriate application of CDT result in one-boxing, since the decision-maker’s choice and Omega’s prediction are both causally determined by the decision-maker’s algorithm, which was fixed before the decision was made?
There have been attempts to create derivatives of CDT that work like that, replacing the “C” of conventional CDT with a type of causality that can run backwards in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.
I’m curious: why can’t normal CDT handle it by itself? Consider two variants of Newcomb’s problem:
1. At run-time, you get to choose the actual decision made in Newcomb’s problem. Omega made its prediction without any information about your choice or about what algorithms you might use to make it; in other words, Omega has no particular insight into your decision-making process. This means that at run-time you are free to choose between one-boxing and two-boxing without backwards causal implications. In this case Omega cannot make perfect or nearly perfect predictions, for the reasons of randomness we already discussed.
2. You get to write the algorithm whose output will determine the choice made in Newcomb’s problem. Omega gets access to the algorithm in advance of its prediction, and no run-time randomness is allowed. In this case Omega can be a perfect predictor, but the correct causal network shows that both the decision-maker’s “choice” and Omega’s prediction are causally downstream of the selection of the decision-making algorithm. CDT holds in this case because you aren’t free at run-time to make any choice other than what the algorithm outputs. A CDT algorithm would identify two consistent outcomes, (one-box && Omega predicted one-box) and (two-box && Omega predicted two-box); coded correctly, it would prefer whichever consistent outcome had the higher expected utility, and so it would one-box (see the sketch below).
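Here is a minimal sketch of the reasoning described in variant 2, assuming the standard payoffs ($1,000,000 for a correctly predicted one-box, $1,000 in the transparent box); the constraint that the prediction equals the choice encodes “Omega gets the algorithm in advance and there is no run-time randomness”:

```python
def payoff(choice, prediction):
    # Standard Newcomb payoffs (an assumption, but the conventional ones).
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if choice == "one-box" else opaque + 1_000

# Enumerate (choice, prediction) pairs and keep only the consistent ones,
# i.e. those a perfect predictor could actually bring about.
consistent = [(c, p)
              for c in ("one-box", "two-box")
              for p in ("one-box", "two-box")
              if c == p]

best = max(consistent, key=lambda cp: payoff(*cp))
print(best, payoff(*best))  # ('one-box', 'one-box') 1000000
```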
(Note: I’m out of my depth here, and I haven’t given a great deal of thought to precommitment and the possibility of allowing algorithms to rewrite themselves.)
You can consider an ideal agent that uses argmax E to find what it chooses, where E is some environment function. What you arrive at is that argmax gets defined recursively (E contains argmax as well), and it just so happens that the resulting expression is only well defined if there’s nothing in the first box and you choose both boxes. I’m writing a short paper about that.
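One way to read that (my own reconstruction, not necessarily the formulation in the paper in progress): the prediction is required to equal the output of argmax, but argmax itself is evaluated with the prediction, i.e. the environment, held fixed, so we can simply check which prediction is a fixed point:

```python
def payoff(choice, prediction):
    # Standard Newcomb payoffs (an assumption).
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if choice == "one-box" else opaque + 1_000

for prediction in ("one-box", "two-box"):
    # argmax over choices with the prediction held fixed (CDT-style evaluation)
    best = max(("one-box", "two-box"), key=lambda c: payoff(c, prediction))
    status = "consistent" if best == prediction else "inconsistent"
    print(f"prediction {prediction} -> choice {best}: {status}")
# prediction one-box -> choice two-box: inconsistent
# prediction two-box -> choice two-box: consistent
```

Under this reading the only self-consistent assignment is an empty first box and a two-boxing choice, as claimed; the sketch a few comments up imposes the prediction-equals-choice constraint before maximizing, and so selects one-boxing instead.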
There have been attempts to create derivatives of CDT that work like that, replacing the “C” of conventional CDT with a type of causality that can run backwards in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.
You may be thinking of Huw Price’s paper, available here.