I don’t think the way you’re phrasing that is very useful. If you write up a CDT algorithm and then put it into a Newcomb’s problem simulator, it will do something. It’s playing the game; maybe not well, but it’s playing.
Perhaps you could say, “‘CDT’ is poorly named; if you follow the actual principles of causality, you’ll get an algorithm that gets the right answer” (I’ve seen people make a claim like that). Or “you can think of CDT as reframing the problem as an easier one that it knows how to play, but one that is substantially different and thus gets the wrong answer”. Or something else like that.
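To make “it will do something” concrete, here is a minimal sketch of the kind of thing I mean; the payoff numbers and the trick of letting a simulated “Omega” simply run the agent in advance are my own illustration, not part of the problem statement:

```python
# Toy Newcomb simulator (illustrative sketch; payoffs and structure assumed).

def cdt_agent():
    """A CDT-style chooser: the boxes are already filled, and whatever the
    opaque box contains, taking both boxes is $1,000 better, so it always
    two-boxes."""
    return "two-box"

def one_boxer():
    """An agent that always takes only the opaque box."""
    return "one-box"

def simulate(agent):
    """Inside a simulator, a perfect 'Omega' can just run the agent in
    advance to see what it will do, then fill the boxes accordingly."""
    prediction = agent()
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    action = agent()
    return opaque_box + (1_000 if action == "two-box" else 0)

print(simulate(cdt_agent))  # 1000     -- it plays, just not well
print(simulate(one_boxer))  # 1000000
```

The point is only that the CDT agent does take an action and collect a payoff; whether that counts as “playing Newcomb” is exactly what’s under dispute.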
The thing is, an actual Newcomb simulator can’t possibly exist because Omega doesn’t exist. There are tons of workarounds, like using coin tosses as a substitute for Omega and ignoring the results whenever the coin was wrong, but that is something fundamentally different from Newcomb.
You can only simulate Newcomb in theory, and it is perfectly possible to just not play a theoretical game if you reject the theory it is based on. In theoretical Newcomb, CDT ignores the rule that Omega is right, so CDT does not play Newcomb.
If you’re trying to simulate Newcomb in reality by substituting Omega with someone who has merely been shown empirically to be right so far, you replace Newcomb with a problem that consists of little more than a simple calculation of priors and payoffs, and that’s hardly the point here.
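For what it’s worth, here is a rough sketch of the coin-toss workaround I mean, to show why the post-selection makes it a different problem; all the numbers are my own illustration:

```python
import random

def payoff(prediction, action):
    """Standard Newcomb payoffs: $1,000,000 in the opaque box iff a
    one-box prediction was made, plus $1,000 for taking the visible box."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque + (1_000 if action == "two-box" else 0)

def post_selected_average(action, rounds=100_000):
    """A fair coin stands in for Omega; rounds where the coin's 'prediction'
    disagrees with the action are simply discarded."""
    kept = []
    for _ in range(rounds):
        prediction = random.choice(["one-box", "two-box"])
        if prediction != action:  # ignore the results whenever the coin was wrong
            continue
        kept.append(payoff(prediction, action))
    return sum(kept) / len(kept)

# After the discarding, the coin looks like a perfect predictor, but only
# because about half the trials (and their payoffs) were thrown away.
print(post_selected_average("one-box"))   # 1000000.0
print(post_selected_average("two-box"))   # 1000.0
```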
Thank you, you just confirmed what I posted as a reply to “see”, which is that CDT doesn’t play in Newcomb at all.
If Omega is fallible (e.g. human), CDT still two-boxes even if Omega empirically seems to be wrong one time in a million.
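To spell out the calculation behind that claim (a sketch; the variable q and the payoff numbers are my own framing, not something from the thread):

```python
def cdt_expected_values(q):
    """CDT treats the opaque box's contents as already fixed: it holds
    $1,000,000 with some probability q that does not depend on the choice
    being made now."""
    ev_one_box = q * 1_000_000
    ev_two_box = q * 1_000_000 + 1_000  # the same term, plus the visible $1,000
    return ev_one_box, ev_two_box

# Whatever q is -- even if Omega has been right 999,999 times out of a
# million -- two-boxing comes out exactly $1,000 ahead in this calculation.
for q in (0.0, 0.5, 0.999999):
    print(q, cdt_expected_values(q))
```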
Fallible does not equal human. A human would still determine whether to put money in the box or not based only on the past, not on the future, and at that point the problem becomes “if you’ve been CDT so far, you won’t get the $1,000,000, no matter what you do in this instance of the game.”
Suppose that Omega is wrong with probability p, where 0 < p < 1 (this is a perfectly realistic and sensible case). What does (your interpretation of) CDT do in this case, and with what probability?
Here is my EDT calculation, writing p for the probability that Omega’s prediction is wrong (as in the question above):

EV(2-box) = P(1-box prediction | I 2-box) × 1,001,000 + P(2-box prediction | I 2-box) × 1,000 = 1,001,000p + 1,000(1−p)

EV(1-box) = P(1-box prediction | I 1-box) × 1,000,000 + P(2-box prediction | I 1-box) × 0 = 1,000,000(1−p)

Pick the larger of the two: 1-box if p < 999,000/2,000,000 ≈ 50%, 2-box otherwise.

Thus one should 1-box even if Omega is only slightly better than chance.
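A quick numeric check of those two expected values (a sketch; the only assumption beyond the standard payoffs is the definition of p above):

```python
def edt_expected_values(p):
    """p is the probability that Omega's prediction is wrong."""
    # Conditional on one-boxing, the prediction was "one-box" with probability 1 - p.
    ev_one_box = (1 - p) * 1_000_000
    # Conditional on two-boxing, the prediction was "one-box" with probability p.
    ev_two_box = p * 1_001_000 + (1 - p) * 1_000
    return ev_one_box, ev_two_box

# The two lines cross at p = 999,000 / 2,000,000 = 0.4995, so one-boxing
# wins whenever Omega is even slightly better than chance.
for p in (0.0, 0.4, 0.4995, 0.5, 0.6):
    one, two = edt_expected_values(p)
    print(f"p={p}: EV(1-box)={one:,.0f}  EV(2-box)={two:,.0f}")
```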