Fundamentally, the Newcomb problem is about weighing our confidence in CDT against the evidence for Omega’s predictive skill. If we have confidence in both, but they point in different directions, then the question is which we have more confidence in. This kind of trade-off happens all the time, just not to this degree.
We could write it out in Jaynes’ notation, and that might make it clear to those to whom it isn’t already.
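A sketch of what that might look like (the symbols here are my own choices, using the usual $1,000,000 / $1,000 payoffs): let X be our background information, E the evidence of Omega’s track record, and H the proposition “Omega predicts my choice correctly this time.” Taking the track record at face value, the expected values are

```latex
E[\text{one-box}] = P(H \mid E, X) \cdot \$1{,}000{,}000
E[\text{two-box}] = \bigl(1 - P(H \mid E, X)\bigr) \cdot \$1{,}000{,}000 + \$1{,}000
```

so one-boxing wins whenever P(H | E, X) > 0.5005. CDT objects that this conditioning is illegitimate, since our choice cannot causally affect an already-filled box. Writing it out this way makes the trade-off explicit: we accept the evidential calculation or CDT’s causal premise, depending on which we have more confidence in.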
Nice post.