note: this was 7 years ago and I’ve refined my understanding of CDT and the Newcomb problem since.
My current understanding of CDT is that it effectively assigns a confidence of 1 to the decision not being causally upstream of Omega’s action, and that is the whole of the problem. It’s “solved” by just moving Omega’s action downstream (by cheating and doing a rapid switch). It’s illustrated by the transparent version, where a CDT agent just sees the second box as empty before it even realizes it’s decided. It’s also “solved” by acausal decision theories, because they move the decision earlier in time to get the jump on Omega.
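As a rough illustration of how that confidence-of-1 assumption drives the divergence, here’s a minimal sketch (my own, not anyone’s canonical formalization) using the standard Newcomb payoffs; the 0.99 accuracy figure is just an assumed number:

```python
BIG, SMALL = 1_000_000, 1_000
accuracy = 0.99  # assumed reliability of Omega's prediction (illustrative)

def cdt_ev(p_big_box_full):
    # CDT: the contents are causally fixed before the choice, so the same
    # probability applies to both actions, and two-boxing dominates for
    # every value of p_big_box_full.
    one_box = p_big_box_full * BIG
    two_box = p_big_box_full * BIG + SMALL
    return one_box, two_box

def acausal_ev(accuracy):
    # Acausal/evidential view: the choice is treated as informative about
    # the prediction, so the probability of the big box being full differs
    # by action.
    one_box = accuracy * BIG
    two_box = (1 - accuracy) * BIG + SMALL
    return one_box, two_box

print(cdt_ev(0.5))           # (500000.0, 501000.0): two-box wins for any p
print(acausal_ev(accuracy))  # (990000.0, 11000.0): one-box wins
```

Two-boxing dominates under CDT no matter what probability you plug in, which is exactly the confidence-of-1 problem: the same p gets applied to both actions.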
For non-rigorous DTs (like human intuition, and what I personally would want to do), there’s a lot of evidence in the setup that Omega is going to turn out to be correct, and one-boxing is an easy call. If the setup is somewhat different (say, neither Omega nor anyone else makes any claims about predictions, just says “sometimes both boxes have money, sometimes only one”), then it’s a pretty straightforward EV calculation based on informal probability assignments, as in the sketch below.
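That informal calculation might look something like this (a sketch under assumed numbers: the credence c that the contents actually track your choice, the accuracy, and the base rate are all placeholders I’m making up for illustration):

```python
BIG, SMALL = 1_000_000, 1_000

def ev(c, accuracy=0.99, p_full_if_independent=0.5):
    # c is your credence that the contents track your choice (Omega-style
    # prediction); with probability 1 - c they're set independently of it,
    # in which case the same base rate applies to both actions.
    one_box = c * accuracy * BIG + (1 - c) * p_full_if_independent * BIG
    two_box = (c * (1 - accuracy) * BIG
               + (1 - c) * p_full_if_independent * BIG
               + SMALL)
    return one_box, two_box

print(ev(c=0.9))  # strong evidence of prediction -> one-boxing wins big
print(ev(c=0.0))  # no claimed prediction -> two-boxing wins by exactly $1,000
```

With lots of evidence that the prediction is real, c is high and one-boxing is the easy call; in the no-claims variant, c is near zero and two-boxing picks up the extra $1,000.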
But it does require not using strict CDT, which rejects the idea that the choice has any backward causality.