I’m not sure which further details you are after.
Thanks for the response! I’m looking for a formal version of the viewpoint you reiterated at the beginning of your most recent comment:
Yes, if the player is allowed access to entropy that Omega cannot have then it would be absurd to also declare that Omega can predict perfectly. [...] The problem specification needs to include a clause for how ‘randomization’ is handled.
That makes a lot of sense, but I haven’t been able to find it stated formally. Wolpert and Benford’s papers (using game-theoretic decision trees or, alternatively, plain probability theory) seem to show formally that the problem formulation is ambiguous, but they are recent papers, and I haven’t been able to tell how well they stand up to outside analysis.
If there is a consensus that sufficient use of randomness prevents Omega from having perfect or nearly perfect predictions, then why is Newcomb’s problem still relevant? And if no randomness is allowed, wouldn’t an appropriate application of CDT result in one-boxing, since the decision-maker’s choice and Omega’s prediction are both causally determined by the decision-maker’s algorithm, which was fixed before the decision was made?
There have been attempts to create derivatives of CDT that work like that, replacing the “C” of conventional CDT with a type of causality that can run about in time, as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately, I cannot recall the reference.
I’m curious: why can’t normal CDT handle it by itself? Consider two variants of Newcomb’s problem:
At run-time, you get to choose the actual decision made in Newcomb’s problem. Omega made its prediction without any information about your choice or about what algorithms you might use to make it. In other words, Omega has no particular insight into your decision-making process. This means that at run-time you are free to choose between one-boxing and two-boxing without backwards causal implications. In this case Omega cannot make perfect or nearly perfect predictions, for the reasons of randomness we already discussed.
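A minimal sketch of the expected-utility comparison in this first variant, assuming the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one; both amounts are my assumption, not stated above). Because the prediction is statistically independent of the run-time choice, CDT holds the prediction probability fixed and two-boxing dominates for every value of that probability:

```python
# Variant 1 (sketch): Omega's prediction is independent of the actual
# run-time choice, so CDT evaluates each action against a fixed
# probability that Omega predicted one-boxing.
M, K = 1_000_000, 1_000  # assumed standard Newcomb payoffs

def expected_utility(action, p_predicted_one_box):
    """Expected payoff when Omega predicted one-boxing with probability p,
    independently of the action taken (no backwards causation)."""
    p = p_predicted_one_box
    if action == "one-box":
        return p * M                      # $1M only if Omega guessed one-box
    return p * (M + K) + (1 - p) * K      # two-box: always add the $1K

# Two-boxing dominates: for any fixed p, it gains exactly K over one-boxing.
for p in (0.0, 0.5, 0.99, 1.0):
    assert expected_utility("two-box", p) > expected_utility("one-box", p)
```

The dominance gap is exactly the $1,000 in the transparent box, which is why CDT two-boxes whenever the prediction is causally (and statistically) independent of the choice.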
You get to write the algorithm whose output will determine the choice made in Newcomb’s problem. Omega gets access to the algorithm in advance of its prediction. No run-time randomness is allowed. In this case, Omega can be a perfect predictor. But the correct causal network shows that both the decision-maker’s “choice” and Omega’s prediction are causally downstream of the selection of the decision-making algorithm. CDT holds in this case because you aren’t free at run-time to make any choice other than what the algorithm outputs. A CDT algorithm would identify two consistent outcomes: (one-box && Omega predicted one-box) and (two-box && Omega predicted two-box). Coded correctly, it would prefer whichever consistent outcome had the higher expected utility, and so it would one-box.
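The consistency argument in this second variant can be sketched directly (again assuming the standard $1,000,000 / $1,000 payoffs, which are my assumption): enumerate action–prediction pairs, keep only those where a perfect predictor’s prediction matches the action, and pick the consistent pair with the higher payoff.

```python
# Variant 2 (sketch): Omega reads the algorithm, so only outcomes where
# the prediction matches the actual choice are consistent.
M, K = 1_000_000, 1_000  # assumed standard Newcomb payoffs

def payoff(action, prediction):
    """Payoff given the action taken and Omega's prediction."""
    opaque = M if prediction == "one-box" else 0   # filled only on predicted one-box
    transparent = K if action == "two-box" else 0  # taken only by two-boxers
    return opaque + transparent

actions = ("one-box", "two-box")
# Perfect prediction means prediction == action in every reachable outcome.
consistent = [(a, p) for a in actions for p in actions if a == p]

best_action, _ = max(consistent, key=lambda ap: payoff(*ap))
print(best_action)  # one-box ($1,000,000 vs $1,000 for two-boxing)
```

The point is that the maximization runs only over the two consistent outcomes, so the apparent dominance of two-boxing never enters: (two-box && predicted one-box) is simply not a reachable node in this causal graph.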
(Note: I’m out of my depth here, and I haven’t given a great deal of thought to precommitment and the possibility of allowing algorithms to rewrite themselves.)
I’d like to cite this article (or related published work) in a research paper I’m writing, which includes an application of an expected-utility-maximizing algorithm to a version of the prisoner’s dilemma. Do you have anything more citable than this article’s URL and your LW username? I didn’t see anything in your profile that could point me towards your real name or anything you might have published.