Thanks for this post; it articulates many of the thoughts I’ve had on the apparent inconsistency of common decision-theoretic paradoxes such as Newcomb’s problem. I’m not an expert in decision theory, but I have a computer science background and significant exposure to these topics, so let me give it a shot.
The strategy I have been considering in my attempt to prove a paradox inconsistent is to prove a contradiction using the problem formulation. In Newcomb’s problem, suppose each player uses a fair coin flip to decide whether to one-box or two-box. Then Omega could not have a sustained correct prediction rate above 50%. But the problem formulation says Omega does; therefore the problem must be inconsistent.
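A minimal simulation of this argument (the function names are mine, purely illustrative): any predictor that lacks access to the player’s coin, no matter how much else it knows, converges to 50% accuracy against a coin-flipping player.

```python
import random

def coin_flip_player():
    # Decide by a fair coin that Omega has no access to.
    return "one-box" if random.random() < 0.5 else "two-box"

def omega_predict():
    # Any rule that cannot see the coin; against a fair coin,
    # a constant guess does as well as anything else.
    return "one-box"

trials = 100_000
correct = sum(omega_predict() == coin_flip_player() for _ in range(trials))
print(correct / trials)  # hovers around 0.5
```

So a sustained accuracy above 50% directly contradicts the player having entropy Omega cannot see.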
Alternatively, Omega knew the outcome of the coin flip in advance; let’s say Omega has access to all relevant information, including any supposed randomness used by the decision-maker. Then we can consider the decision to already have been made; the idea of a choice occurring after Omega has left is illusory (i.e., the decision is deterministic; anyone with enough information could have predicted it). Admittedly, as you say quite eloquently:
Choice is not something inherent to a system, but a feature of an outsider’s model of a system, in much the same sense as randomness is not something inherent to a game of Eeny, meeny, miny, moe, however much it might seem that way to children.
In this case of the all-knowing Omega, talking about what someone should choose after Omega has left seems mistaken. The agent is no longer free to make an arbitrary decision at run-time, since that would have backwards causal implications; we can, without restricting which algorithm is chosen, require the decision-making algorithm to be written down and provided to Omega prior to the whole simulation. Since Omega can predict the agent’s decision, the agent’s decision does determine what’s in the box, despite the usual claim of no causality. Taking that into account, CDT doesn’t fail after all.
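The “algorithm provided in advance” setup can be sketched as a toy model (my own assumptions, with the standard $1,000,000 / $1,000 payoffs): Omega simply runs the submitted algorithm, so the box contents are causally downstream of the algorithm itself and the prediction cannot diverge from the decision.

```python
def omega_fill_boxes(agent_algorithm):
    # Omega simulates the submitted algorithm; the prediction
    # is guaranteed to match the eventual decision.
    prediction = agent_algorithm()
    big = 1_000_000 if prediction == "one-box" else 0
    small = 1_000
    return big, small

def one_boxer():
    return "one-box"

big, small = omega_fill_boxes(one_boxer)
payout = big  # a one-boxer takes only the big box
print(payout)  # 1_000_000
```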
It really does seem to me like most of these supposed paradoxes of decision theory have these inconsistent setups. I see that wedrifid says of coin flips:
If the FAQ left this out then it is indeed faulty. It should either specify that if Omega predicts the human will use that kind of entropy then it gets a “Fuck you” (gets nothing in the big box, or worse) or, at best, that Omega awards that kind of randomization with a proportional payoff (ie. If behavior is determined by a fair coin then the big box contains half the money.)
This is a fairly typical (even “Frequent”) question, so it needs to be included in the problem specification. But it can be considered a minor technical detail.
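wedrifid’s proportional-payoff clause is easy to make concrete (a sketch of my own, assuming the standard $1,000,000 / $1,000 payoffs): if the player one-boxes with probability p, the big box contains p times the full amount, and randomizing never beats pure one-boxing.

```python
BIG, SMALL = 1_000_000, 1_000  # standard Newcomb payoffs (assumed)

def expected_value(p_one_box):
    # Proportional rule: the big box holds p * BIG, where p is
    # the probability that the player one-boxes.
    big = p_one_box * BIG
    # One-boxing takes the big box only; two-boxing takes both.
    return p_one_box * big + (1 - p_one_box) * (big + SMALL)

print(expected_value(1.0))  # pure one-boxing: 1,000,000
print(expected_value(0.5))  # fair coin: 500,500
print(expected_value(0.0))  # pure two-boxing: 1,000
```

Under this clause the coin-flip “exploit” simply stops paying, which is presumably the point of the proposal.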
I would love to hear from someone in further detail on these issues of consistency. Have they been addressed elsewhere? If so, where?
The strategy I have been considering in my attempt to prove a paradox inconsistent is to prove a contradiction using the problem formulation.
This seems like a worthy approach to paradoxes! I’m going to suggest broadening your search slightly: specifically, to include the claim “and this is paradoxical” as one of the things that can be rejected as producing contradictions. Because in this case there just isn’t a paradox. You take the one box and get rich; if some decision theory says to take both boxes, you go and get a better theory. For this reason “Newcomb’s Paradox” is a misnomer, and I would only accept “Newcomb’s Problem” as a name.
In Newcomb’s problem, suppose each player uses a fair coin flip to decide whether to one-box or two-box. Then Omega could not have a sustained correct prediction rate above 50%. But the problem formulation says Omega does; therefore the problem must be inconsistent.
Yes, if the player is allowed access to entropy that Omega cannot have, then it would be absurd to also declare that Omega can predict perfectly. If the coin flip is replaced with a quantum coin flip, the problem becomes even worse: it leaves an Omega that can supposedly predict perfectly what will happen, but which faces the plainly inconsistent task of making contradictory things happen. The problem specification needs to include a clause for how ‘randomization’ is handled.
Alternatively, Omega knew the outcome of the coin flip in advance; let’s say Omega has access to all relevant information, including any supposed randomness used by the decision-maker. Then we can consider the decision to already have been made; the idea of a choice occurring after Omega has left is illusory (i.e., the decision is deterministic; anyone with enough information could have predicted it).
Here is where I should be able to link you to the wiki page on free will, where you would be given an explanation of why the notion that determinism is incompatible with choice is a confusion. Alas, that page still has pretentious “Find Out For Yourself” tripe on it instead of useful content. The Wikipedia page on compatibilism is somewhat useful, but not particularly tailored to a reductionist decision-theory focus.
In this case of the all-knowing Omega, talking about what someone should choose after Omega has left seems mistaken. The agent is no longer free to make an arbitrary decision at run-time, since that would have backwards causal implications; we can, without restricting which algorithm is chosen, require the decision-making algorithm to be written down and provided to Omega prior to the whole simulation. Since Omega can predict the agent’s decision, the agent’s decision does determine what’s in the box, despite the usual claim of no causality. Taking that into account, CDT doesn’t fail after all.
There have been attempts to create derivatives of CDT that work like that, replacing the “C” of conventional CDT with a type of causality that runs backwards in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.
I would love to hear from someone in further detail on these issues of consistency. Have they been addressed elsewhere? If so, where?
I’m not sure which further details you are after. Are you after a description of Newcomb’s problem that includes the details necessary to make it consistent? Or about other potential inconsistencies? Or other debates about whether the problems are inconsistent?
Thanks for the response! I’m looking for a formal version of the viewpoint you reiterated at the beginning of your most recent comment:
Yes, if the player is allowed access to entropy that Omega cannot have then it would be absurd to also declare that Omega can predict perfectly. [...] The problem specification needs to include a clause for how ‘randomization’ is handled.
That makes a lot of sense, but I haven’t been able to find it stated formally. Wolpert and Benford’s papers (using game theory decision trees or alternatively plain probability theory) seem to formally show that the problem formulation is ambiguous, but they are recent papers, and I haven’t been able to tell how well they stand up to outside analysis.
If there is a consensus that the sufficient use of randomness prevents Omega from having perfect or nearly perfect predictions, then why is Newcomb’s problem still relevant? If there’s no randomness, wouldn’t an appropriate application of CDT result in one-boxing since the decision-maker’s choice and Omega’s prediction are both causally determined by the decision-maker’s algorithm, which was fixed prior to the making of the decision?
There have been attempts to create derivatives of CDT that work like that, replacing the “C” of conventional CDT with a type of causality that runs backwards in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.
I’m curious: why can’t normal CDT handle it by itself? Consider two variants of Newcomb’s problem:
At run-time, you get to choose the actual decision made in Newcomb’s problem. Omega made its prediction without any information about your choice or what algorithms you might use to make it. In other words, Omega doesn’t have any particular insight into your decision-making process. This means at run-time you are free to choose between one-boxing and two-boxing without backwards causal implications. In this case Omega cannot make perfect or nearly perfect predictions, for reasons of randomness which we already discussed.
You get to write the algorithm, the output of which will determine the choice made in Newcomb’s problem. Omega gets access to the algorithm in advance of its prediction. No run-time randomness is allowed. In this case, Omega can be a perfect predictor. But the correct causal network shows that both the decision-maker’s “choice” as well as Omega’s prediction are causally downstream from the selection of the decision-making algorithm. CDT holds in this case because you aren’t free at run-time to make any choice other than what the algorithm outputs. A CDT algorithm would identify two consistent outcomes: (one-box && Omega predicted one-box), and (two-box && Omega predicted two-box). Coded correctly, it would prefer whichever consistent outcome had the highest expected utility, and so it would one-box.
(Note: I’m out of my depth here, and I haven’t given a great deal of thought to precommitment and the possibility of allowing algorithms to rewrite themselves.)
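Variant 2 can be sketched directly (my own toy formalization, standard $1,000,000 / $1,000 payoffs assumed): since the prediction provably equals the choice, a CDT-style search need only compare the two self-consistent outcomes.

```python
def payoff(choice, prediction):
    big = 1_000_000 if prediction == "one-box" else 0
    small = 1_000
    return big if choice == "one-box" else big + small

# With the algorithm fixed in advance, prediction == choice,
# so only the two self-consistent outcomes are reachable.
consistent = {c: payoff(c, c) for c in ("one-box", "two-box")}
best = max(consistent, key=consistent.get)
print(consistent)  # {'one-box': 1000000, 'two-box': 1000}
print(best)        # one-box
```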
You can consider an ideal agent that uses argmax over E to find what it chooses, where E is some environment function. Then what you arrive at is that the argmax is defined recursively (E contains the argmax as well), and it just so happens that the resulting expression is only well defined if there’s nothing in the first box and you choose both boxes. I’m writing a short paper about that.
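In my notation (a sketch, not necessarily the paper’s formulation), the recursion is a fixed-point equation: the agent’s choice a is computed from an environment E that itself contains the agent,

```latex
a \;=\; \operatorname*{argmax}_{x \in \{\text{one-box},\,\text{two-box}\}} \; E(x, a)
```

so whether the expression is well defined comes down to whether a consistent solution for a exists at all.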
There have been attempts to create derivatives of CDT that work like that, replacing the “C” of conventional CDT with a type of causality that runs backwards in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.
You may be thinking of Huw Price’s paper, available here.