I sympathize with your frustration at those who point you to references without adequate functional summaries. Unfortunately, I struggle with some of the same problems you’re asking about.
Still, I can point you to the causal map that Eliezer_Yudkowsky believes captures this problem accurately (ETA: That means Newcomb’s problem, though this discussion started off on a different one).
The final diagram in this post shows how he views it. He justifies this causal model by the constraints of the problem, which he states here.
it is pretty clear that the Newcomb’s Problem setup, if it is to be analyzed in causal terms at all, must have nodes corresponding to logical uncertainty, on pain of violating the axioms governing causal graphs. Furthermore, in being told that Omega’s leaving box B full or empty correlates to our decision to take only one box or both boxes, and that Omega’s act lies in the past, and that Omega’s act is not directly influencing us, and that we have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output, then we’re being told in unambiguous terms (I think) to make our own physical act and Omega’s act a common descendant of the unknown logical output of our known computation.[italics left off]
Also, here’s my expanded, modified network to account for a few other things (click to enlarge).
ETA: Bolding was irritating, so I’ve decided to separately list what his criteria for a causal map are, given the problem statement. (The implication for the causal graph follows each one in parentheses.)
Must have nodes corresponding to logical uncertainty. (Self-explanatory.)
Omega’s decision on box B correlates with our decision of which boxes to take. (Box decision and Omega’s decision are d-connected.)
Omega’s act lies in the past. (Actions after Omega’s act are uncorrelated with actions before Omega’s act, once you know Omega’s act.)
Omega’s act is not directly influencing us. (No causal arrow directly from Omega to us/our choice.)
We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision. (Together with the next criterion, this seems to be saying the same thing: the arrow runs from our computation directly to our logical output, and nothing else screens off the correlation.)
Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)
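Taken together, those criteria pin down a very small DAG. Here is a minimal sketch of that structure in Python with networkx, just to make the arrow structure explicit; the node names (our_computation, logical_output, our_box_choice, omegas_act) are my own labels for illustration, not anything taken from Eliezer's post or diagram.

```python
import networkx as nx

G = nx.DiGraph()

# Criterion 6: our known computation is the only direct ancestor of our
# unknown logical output.
G.add_edge("our_computation", "logical_output")

# Criteria 2-4: the two acts are correlated, Omega's act is already in the
# past, and Omega does not directly influence us -- so both acts hang off the
# logical output as a common ancestor, with no arrow between them.
G.add_edge("logical_output", "our_box_choice")
G.add_edge("logical_output", "omegas_act")

assert nx.is_directed_acyclic_graph(G)

# The two acts are d-connected through the fork
# our_box_choice <- logical_output -> omegas_act,
# and conditioning on logical_output blocks that path.
print(sorted(nx.ancestors(G, "our_box_choice")))  # ['logical_output', 'our_computation']
print(sorted(nx.ancestors(G, "omegas_act")))      # ['logical_output', 'our_computation']
print(list(G.predecessors("logical_output")))     # ['our_computation']: its only parent
```

The fork at logical_output is what does the work: the correlation between our box choice and Omega's act comes entirely from that shared ancestor, since criteria 3 and 4 rule out any direct arrow between them.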
Ah, okay, thanks. I can start reading those, then.