If you rule out probabilities of 1, what do you assign to the probability that Omega is cheating, and somehow gimmicking the boxes to change the contents the instant you indicate your choice, before the contents are revealed?
Presumably the mechanisms of “correct prediction” are irrelevant, and once your expectation that this instance will be predicted correctly gets above a million to one, you one-box.
All right, yes. But that isn’t how anyone has ever interpreted Newcomb’s Problem. AFAIK it is literally always used to support some kind of acausal decision theory, which it does /not/ support if what is in fact happening is that Omega is cheating.
note: this was 7 years ago and I’ve refined my understanding of CDT and the Newcomb problem since.
My current understanding of CDT is that it does effectively assign a confidence of 1 to the decision not being causally upstream of Omega’s action, and that is the whole of the problem. It’s “solved” by just moving Omega’s action downstream (by cheating and doing a rapid switch). It’s … illustrated? … by the transparent version, where a CDT agent just sees the second box as empty before it even realizes it’s decided. It’s also “solved” by acausal decision theories, because they move the decision earlier in time to get the jump on Omega.
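To make that concrete, here’s a minimal sketch (mine, not part of the original exchange) of the calculation a strict CDT agent runs, assuming the usual $1,000 / $1,000,000 payoffs. Because the contents are treated as fixed and causally independent of the choice, two-boxing comes out ahead by exactly $1,000 no matter what credence the agent puts on the million being there:

```python
# Hypothetical illustration (mine, not from the thread), assuming the standard
# payoffs: $1,000 always in the transparent box, $1,000,000 in the opaque box
# iff Omega predicted one-boxing.
SMALL = 1_000
BIG = 1_000_000

def cdt_ev(action: str, p_big: float) -> float:
    """Strict CDT: treat the opaque box's contents as a fixed fact that the
    current choice cannot influence; p_big is the credence the million is there."""
    base = p_big * BIG
    return base + SMALL if action == "two-box" else base

# Whatever credence the agent holds, two-boxing comes out ahead by exactly
# $1,000, so "take both" dominates and the predictor's track record never
# enters the math.
for p_big in (0.25, 0.5, 0.75):
    assert cdt_ev("two-box", p_big) - cdt_ev("one-box", p_big) == SMALL
```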
For non-rigorous DTs (like human intuition, and what I personally would want to do), there’s a lot of evidence in the setup that Omega is going to turn out to be correct, and one-boxing is an easy call. If the setup is somewhat different (say, neither Omega nor anyone else makes any claims about predictions, just says “sometimes both boxes have money, sometimes only one”), then it’s a pretty straightforward EV calculation based on kind-of-informal probability assignments.
But it does require not using strict CDT, which rejects the idea that the choice can have any backward causal influence on what’s in the boxes.
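To put numbers on that “straightforward EV calculation”: a minimal sketch under the same assumed payoffs, where all the evidence about Omega gets compressed into one informal probability p_correct. Conditioning the contents on the choice itself (exactly the step strict CDT refuses to take) makes one-boxing win once p_correct is even slightly better than chance:

```python
# Hypothetical sketch (mine, not from the thread), same assumed payoffs, with a
# single informal parameter p_correct: the probability that Omega predicted
# this very choice correctly.
SMALL = 1_000
BIG = 1_000_000

def evidential_ev(action: str, p_correct: float) -> float:
    """Condition the opaque box's contents on the choice itself, rather than
    treating them as fixed the way strict CDT does."""
    if action == "one-box":
        return p_correct * BIG
    return (1 - p_correct) * BIG + SMALL

for p_correct in (0.5, 0.6, 0.9, 0.999999):
    one = evidential_ev("one-box", p_correct)
    two = evidential_ev("two-box", p_correct)
    print(f"p_correct={p_correct}: one-box ${one:,.0f}, two-box ${two:,.0f}")

# With these payoffs the crossover sits just above a coin flip:
# break-even at p_correct = (BIG + SMALL) / (2 * BIG) = 0.5005.
```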