The thing is, an actual Newcomb simulator can’t possibly exist, because Omega doesn’t exist. There are plenty of workarounds, like using coin tosses as a substitute for Omega and discarding every trial in which the coin turned out to be wrong, but that is something fundamentally different from Newcomb.
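To make that concrete, here is a minimal sketch of the coin-toss workaround (the function name, the fair coin, and the 50/50 player are my own illustrative assumptions, not anything from the problem statement). The point it illustrates is that discarding the trials where the coin was wrong is post-selection, not prediction:

```python
import random

def coin_toss_newcomb_trials(n_trials, agent_one_boxes_prob=0.5):
    """Simulate the coin-toss 'workaround': a fair coin stands in for Omega,
    and every trial where the coin's 'prediction' disagrees with the player's
    actual choice is thrown away."""
    kept_payoffs = []
    for _ in range(n_trials):
        prediction_one_box = random.random() < 0.5              # the coin has no insight
        player_one_boxes = random.random() < agent_one_boxes_prob
        if prediction_one_box != player_one_boxes:
            continue                                             # discard the "wrong" coin tosses
        # Box B holds $1,000,000 iff one-boxing was "predicted"; box A always holds $1,000.
        payoff = (1_000_000 if prediction_one_box else 0) + (0 if player_one_boxes else 1_000)
        kept_payoffs.append(payoff)
    return kept_payoffs

# Within the kept trials the coin is right 100% of the time by construction,
# so it merely *looks* like an infallible Omega; the correlation comes from
# throwing data away, not from predicting the player.
```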
You can only simulate Newcomb in theory, and it is perfectly possible to simply not play a theoretical game if you reject the theory it is based on. In theoretical Newcomb, CDT does not take the rule that Omega is always right into account, so CDT does not play Newcomb.
If you try to simulate Newcomb in reality by replacing Omega with someone who has merely been proven right empirically, you replace Newcomb with a problem that consists of little more than a simple calculation of priors and payoffs, and that is hardly the point here.
If Omega is fallible (e.g. human), CDT still two-boxes even if Omega empirically seems to be wrong one time in a million.
Fallible does not equal human. A human would still decide whether or not to put the money in the box based only on the past, not on the future, and at that point the problem becomes “if you’ve been a CDT agent so far, you won’t get the $1,000,000, no matter what you do in this instance of the game.”
Suppose that Omega is wrong with probability p<1 (this is a perfectly realistic and sensible case). What does (your interpretation of) CDT do in this case, and with what probability?
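For what it’s worth, here is a sketch of how the usual dominance reading of CDT handles that case, assuming the error rate p only enters through the agent’s credence q that the million is already in the box (the function name and that framing are mine, purely for illustration):

```python
def cdt_expected_utilities(q_million_in_box):
    """Causal expected utilities when the agent believes the opaque box already
    contains $1,000,000 with probability q. The contents are causally fixed at
    decision time, so q does not depend on which choice is being evaluated."""
    eu_one_box = q_million_in_box * 1_000_000
    eu_two_box = q_million_in_box * 1_000_000 + 1_000   # always exactly $1,000 more
    return eu_one_box, eu_two_box

# Whatever Omega's error rate p is, it only moves q; two-boxing comes out
# ahead by $1,000 for every q, so on this reading CDT two-boxes with
# probability 1, for any p.
```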
Here is my EDT calculation, writing p for the probability that Omega’s prediction is wrong:
calculate EU(2-box) = p(1-box predicted | 2-box) * $1,001,000 + p(2-box predicted | 2-box) * $1,000 = 1,001,000p + 1,000(1-p)
calculate EU(1-box) = p(1-box predicted | 1-box) * $1,000,000 + p(2-box predicted | 1-box) * $0 = 1,000,000(1-p)
pick the larger of the two (which is 1-box if p < 999/2000 ≈ 50%, 2-box otherwise).
Thus one should 1-box even if Omega is slightly better than chance.
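A small sketch that just recomputes the numbers above, with p the probability that Omega’s prediction is wrong and the standard $1,000 / $1,000,000 payoffs:

```python
def edt_expected_utilities(p_omega_wrong):
    """EDT conditions on the action: given that I two-box, Omega predicted
    one-boxing (and filled box B) with probability p, and so on."""
    p = p_omega_wrong
    eu_two_box = p * 1_001_000 + (1 - p) * 1_000   # wrong prediction -> box B is full
    eu_one_box = (1 - p) * 1_000_000 + p * 0       # right prediction -> box B is full
    return eu_one_box, eu_two_box

# Crossover: 1,000,000(1-p) = 1,001,000p + 1,000(1-p)  =>  p = 999/2000 = 0.4995,
# so EDT one-boxes for any error rate below ~50%, i.e. whenever Omega is even
# slightly better than chance.
for p in (0.0, 0.25, 0.4995, 0.75):
    print(p, edt_expected_utilities(p))
```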