In Newcomb’s paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb’s paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb’s paradox using a recent extension of game theory in which the players set conditional probability distributions in a Bayes net. We show that the two game theory recommendations in Newcomb’s scenario have different presumptions for what Bayes net relates your choice and the algorithm’s prediction. We resolve the paradox by proving that these two Bayes nets are incompatible. We also show that the accuracy of the algorithm’s prediction, the focus of much previous work, is irrelevant. In addition we show that Newcomb’s scenario only provides a contradiction between game theory’s expected utility and dominance principles if one is sloppy in specifying the underlying Bayes net. We also show that Newcomb’s paradox is time-reversal invariant; both the paradox and its resolution are unchanged if the algorithm makes its ‘prediction’ after you make your choice rather than before.
In a completely perverse coincidence, Benford’s law, attributed to an apparently unrelated Frank Benford, was apparently invented by an unrelated Simon Newcomb:
http://en.wikipedia.org/wiki/Benford%27s_law
Okay, now that I’ve read section 2 of the paper (where it gives the two decompositions), it doesn’t seem so insightful. Here’s my summary of the Wolpert/Benford argument:
“There are two Bayes nets to represent the problem: Fearful, where your decision y causally influences Omega’s decision g, and Realist, where Omega’s decision causally influences yours.
“Fearful: P(y,g) = P(g|y) * P(y), you set P(y). Bayes net: Y → G. One-boxing is preferable.
“Realist: P(y,g) = P(y|g) * P(g), you set P(y|g). Bayes net: G → Y. Two-boxing is preferable.”
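To make the contrast concrete, here is a minimal sketch (mine, not the paper’s) of the two expected-utility calculations, assuming the standard $1,000 / $1,000,000 payoffs and an illustrative 99%-accurate predictor; neither number comes from the summary above:

```python
# Sketch only: the payoffs and the 0.99 accuracy are illustrative assumptions.
BOX_A = 1_000        # transparent box, always filled
BOX_B = 1_000_000    # opaque box, filled iff Omega predicted one-boxing
ACC = 0.99           # assumed P(g = y) under the Fearful net

def payoff(y, g):
    """y, g in {'one', 'two'}: your choice and Omega's prediction."""
    return (BOX_B if g == 'one' else 0) + (BOX_A if y == 'two' else 0)

# Fearful: P(y,g) = P(g|y) P(y); you set P(y), so compare E[payoff | y].
for y in ('one', 'two'):
    other = 'two' if y == 'one' else 'one'
    eu = ACC * payoff(y, y) + (1 - ACC) * payoff(y, other)
    print(f"Fearful, {y}-box: EU = {eu:,.0f}")   # one: 990,000   two: 11,000

# Realist: P(y,g) = P(y|g) P(g); g is already fixed, and for every fixed g
# two-boxing pays exactly BOX_A more, so it dominates whatever P(g) is.
for g in ('one', 'two'):
    print(f"Realist, given g = {g}: two-box beats one-box by "
          f"{payoff('two', g) - payoff('one', g):,}")
```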
My response: these choices neglect the option AnnaSalamon and Eliezer_Yudkowsky presented previously, namely that Omega’s act and your act are both causally influenced by a common timeless node, which is a more faithful representation of the problem statement.
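For comparison, here is a toy version of that common-node structure (my own made-up numbers, not anything from the linked comments): a hidden node D, your decision algorithm, is the parent of both Omega’s prediction G and your choice Y, so the two correlate with no arrow between them in either direction.

```python
# Sketch only: the prior over D and the 0.99 reading accuracy are made-up numbers.
P_D = {'one': 0.5, 'two': 0.5}                          # prior over your algorithm's output
p_g_given_d = lambda g, d: 0.99 if g == d else 0.01     # Omega reads D well
p_y_given_d = lambda y, d: 1.0 if y == d else 0.0       # you execute D faithfully

# Joint P(y, g) = sum_d P(d) P(g|d) P(y|d): Y and G correlate via D alone.
joint = {(y, g): sum(P_D[d] * p_g_given_d(g, d) * p_y_given_d(y, d)
                     for d in ('one', 'two'))
         for y in ('one', 'two') for g in ('one', 'two')}
print("P(prediction matches choice) =",
      joint[('one', 'one')] + joint[('two', 'two')])    # -> 0.99
```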
Self-serving FYI: In this comment I summarized Eliezer_Yudkowsky’s list of the ways that Newcomb’s problem, as stated, constrains a Bayes net.
For the non-link-clickers:
Must have nodes corresponding to logical uncertainty (Self-explanatory)
Omega’s decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)
Omega’s act lies in the past. (ETA: Since nothing is simultaneous with Omega’s act, knowledge of it screens off the influence of everything before it; on the Bayes net, Omega’s act blocks all paths from the past to future events, and only paths originating from future or timeless events can bypass it.)
Omega’s act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)
We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (This and the next item seem to be saying the same thing: the arrow to our logical output comes directly from our computation.)
Our computation is the only direct ancestor of our logical output. (The only arrow pointing to our logical output comes from our computation.)
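Taken together, the constraints pin down something like the following DAG. This is my own sketch with my own node labels, offered only to show one structure that satisfies the list:

```python
# Sketch only: node names are mine; this is one DAG of many consistent with the list.
dag = {
    'Computation':     ['LogicalOutput', 'OmegaPrediction'],  # common timeless ancestor
    'LogicalOutput':   ['BoxDecision'],     # our computation's output drives our choice
    'OmegaPrediction': ['BoxContents'],     # Omega fills box B from its prediction
    'BoxDecision':     [],
    'BoxContents':     [],
}

def parents(node):
    return [p for p, kids in dag.items() if node in kids]

# "Our computation is the only direct ancestor of our logical output."
assert parents('LogicalOutput') == ['Computation']
# "Omega's act is not directly influencing us": no arrow from the prediction
# to our logical output or to the box decision it produces.
assert 'LogicalOutput' not in dag['OmegaPrediction'] and 'BoxDecision' not in dag['OmegaPrediction']
# "Box decision and Omega decision are d-connected": they share the common
# ancestor Computation via BoxDecision <- LogicalOutput <- Computation -> OmegaPrediction.
print(set(parents('LogicalOutput')) & set(parents('OmegaPrediction')))  # {'Computation'}
```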
New on arXiv:
David H. Wolpert, Gregory Benford. (2010). What does Newcomb’s paradox teach us?
See also:
Newcomb’s problem
Newcomb’s Problem standard positions
Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives