Naively, in the actual Newcomb’s problem, if Omega is only correct 1⁄999,000 + epsilon percent of the time, then CDT seems to do about as well as whatever theory solves this problem.
This is not quite correct; this comment hints at why. CDT will sever the causal links pointing into your decision, and so if you don’t think that what you choose to do will affect what Omega has guessed in the past, then it doesn’t matter how good a guesser you think Omega is.
The reason Newcomb’s Problem proper causes such headaches and discussion is, in my mind, a failure to separate what causation means in reality from what causation means in decision theory. A model of Newcomb’s problem proper in which our decision causes Omega’s prediction violates the realistic assumption that the future cannot cause the past; a model in which our decision does not cause Omega’s prediction violates the problem statement that Omega is a perfect predictor (i.e. the missing arrow implies the two variables are independent, when in fact they are dependent).
If you discard the requirement that causes seem physically reasonable, then CDT can reason in the general case here. (You just stick the probabilistic dependence in like you would any other.) The issue is that, in reality, requiring influences to be real makes good sense!
I think my original post may have been unclear. Sorry about that.
What I meant was not that Omega’s accuracy affects what CDT does. What I meant was that the accuracy impacts how much “pick up” you can get from a better theory. If Omega is perfect, one-boxing gets you $1,000,000 vs. $1,000 from two-boxing, an increase of $999,000. If Omega is less than perfect, then sometimes the one-boxer gets nothing or the two-boxer gets $1,001,000, which brings their average results closer together. At some accuracy P, CDT and the theory which solves the problem and correctly chooses to one-box do almost equally well.
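As a sketch of the arithmetic (using the $1,000,000 / $1,000 payoffs assumed in this thread, and taking P as the probability Omega predicts correctly):

```python
# Expected payoffs in Newcomb's problem as a function of Omega's accuracy P,
# with the standard payoffs used in this discussion.

def expected_one_box(p):
    # With probability p Omega correctly predicted one-boxing: $1,000,000.
    # With probability 1 - p Omega guessed wrong and the opaque box is empty.
    return p * 1_000_000

def expected_two_box(p):
    # With probability 1 - p Omega wrongly predicted one-boxing: $1,001,000.
    # With probability p Omega correctly predicted two-boxing: $1,000.
    return (1 - p) * 1_001_000 + p * 1_000

# At P = 1 the gap is the $999,000 increase mentioned above.
print(expected_one_box(1.0) - expected_two_box(1.0))  # 999000.0

# Break-even accuracy: solve p * 1,000,000 = (1 - p) * 1,001,000 + p * 1,000,
# which gives p = 1,001,000 / 2,000,000 = 0.5005.
p_star = 1_001_000 / 2_000_000
print(p_star, expected_one_box(p_star) - expected_two_box(p_star))
```

At accuracies near that break-even point of 0.5005 the two theories do almost equally well, which is the point being made here.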
Omega’s accuracy is related to the information leakage about the chooser’s decision theory.
> What I meant was that the accuracy impacts how much “pick up” you can get from a better theory.
Agreed. Because of the simplicity of Newcomb’s problem proper, I think this is going to make for an unimpressive graph, though: the rewards are linear in Omega’s accuracy P, so it should just be a simple piecewise-linear function for the clever theory, diverging from the two-boxer once the accuracy is high enough and eventually reaching the increase of $999,000 at P = 1.
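The piecewise shape can be sketched directly: the clever theory's expected reward is just the upper envelope of the two linear payoff lines (this assumes the same $1,000,000 / $1,000 payoffs as above, and that the clever theory picks whichever box strategy has the higher expected value at each P):

```python
# The clever theory's expected reward as a function of Omega's accuracy P:
# the maximum of the one-boxing and two-boxing lines, hence piecewise linear.

def clever_theory(p):
    one_box = p * 1_000_000
    two_box = (1 - p) * 1_001_000 + p * 1_000
    # Below the break-even accuracy (P = 0.5005) this coincides with the
    # two-boxer; above it, one-boxing pulls ahead, reaching a $999,000
    # advantage at P = 1.
    return max(one_box, two_box)

for p in (0.0, 0.25, 0.5005, 0.75, 1.0):
    two_box = (1 - p) * 1_001_000 + p * 1_000
    advantage = clever_theory(p) - two_box
    print(f"P={p}: clever={clever_theory(p):,.0f}, advantage={advantage:,.0f}")
```

The "advantage" column is flat at zero up to the break-even point and then rises linearly, which is why the graph is unimpressive.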