Thanks for the response! I’m looking for a formal version of the viewpoint you reiterated at the beginning of your most recent comment:
Yes, if the player is allowed access to entropy that Omega cannot have then it would be absurd to also declare that Omega can predict perfectly. [...] The problem specification needs to include a clause for how ‘randomization’ is handled.
That makes a lot of sense, but I haven’t been able to find it stated formally. Wolpert and Benford’s papers (using game-theoretic decision trees and, alternatively, plain probability theory) seem to show formally that the problem formulation is ambiguous, but they are recent papers, and I haven’t been able to tell how well they stand up to outside analysis.
If there is a consensus that the sufficient use of randomness prevents Omega from having perfect or nearly perfect predictions, then why is Newcomb’s problem still relevant? If there’s no randomness, wouldn’t an appropriate application of CDT result in one-boxing since the decision-maker’s choice and Omega’s prediction are both causally determined by the decision-maker’s algorithm, which was fixed prior to the making of the decision?
There have been attempts to create derivatives of CDT that work like that: they replace the “C” of conventional CDT with a type of causality that can run backwards in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately, I cannot recall the reference.
I’m curious: why can’t normal CDT handle it by itself? Consider two variants of Newcomb’s problem:
1. At run-time, you get to choose the actual decision made in Newcomb’s problem. Omega made its prediction without any information about your choice or what algorithms you might use to make it. In other words, Omega doesn’t have any particular insight into your decision-making process. This means at run-time you are free to choose between one-boxing and two-boxing without backwards causal implications. In this case Omega cannot make perfect or nearly perfect predictions, for reasons of randomness which we already discussed.
2. You get to write the algorithm, the output of which will determine the choice made in Newcomb’s problem. Omega gets access to the algorithm in advance of its prediction. No run-time randomness is allowed. In this case, Omega can be a perfect predictor. But the correct causal network shows that both the decision-maker’s “choice” and Omega’s prediction are causally downstream from the selection of the decision-making algorithm. CDT holds in this case because you aren’t free at run-time to make any choice other than what the algorithm outputs. A CDT algorithm would identify two consistent outcomes: (one-box && Omega predicted one-box) and (two-box && Omega predicted two-box). Coded correctly, it would prefer whichever consistent outcome had the higher expected utility, and so it would one-box.
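For concreteness, here is a minimal sketch of that selection step, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing is predicted, $1,000 in the transparent box); the names and payoff table are illustrative, not something the problem statement fixes:

```python
# Sketch of variant 2: Omega reads the algorithm, so prediction always
# matches choice. A CDT-style selector compares only the consistent
# (choice, prediction) pairs and commits to the better one.
# Payoff amounts are the standard Newcomb figures (illustrative).

PAYOFFS = {  # (choice, prediction) -> utility in dollars
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

def choose_among_consistent():
    # With a perfect predictor, only the matching pairs are realizable.
    consistent = {c: PAYOFFS[(c, c)] for c in ("one-box", "two-box")}
    return max(consistent, key=consistent.get)

print(choose_among_consistent())  # -> "one-box"
```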
(Note: I’m out of my depth here, and I haven’t given a great deal of thought to precommitment and the possibility of allowing algorithms to rewrite themselves.)
You can consider an ideal agent that uses argmax over E to find what it chooses, where E is some environment function. What you arrive at is that the argmax gets defined recursively (E contains the argmax as well), and it just so happens that the resulting expression is only well defined if there’s nothing in the first box and you choose both boxes. I’m writing a short paper about that.
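Concretely, a rough sketch of the fixed-point reading (my reconstruction with illustrative names and the standard payoffs, not the paper’s notation): a choice is self-consistent only if it is the argmax of utility given that Omega’s prediction embeds that very argmax, and we can check both candidates directly:

```python
# Fixed-point reading of the recursive argmax (illustrative reconstruction).
# A choice a_star is self-consistent if it maximizes utility given that
# Omega's prediction is a_star itself (the prediction contains the argmax).

PAYOFFS = {  # (choice, prediction) -> utility in dollars
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

def is_fixed_point(a_star):
    best = max(("one-box", "two-box"), key=lambda a: PAYOFFS[(a, a_star)])
    return best == a_star

for a in ("one-box", "two-box"):
    print(a, is_fixed_point(a))
# one-box False  (given prediction "one-box", two-boxing pays more)
# two-box True   (the only well-defined outcome: empty first box, take both)
```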