AFAIU, the point in parentheses basically amounts to the idea that in the absence of any known causal links I should use EDT (=Bayesian reasoning)
You use all that is known about how events, including your own decision, depend on each other. Some of these dependencies can’t withstand your interventions, which often themselves come out of the error terms. In this way, EDT is the same as TDT, its errors originating from a failure to recognize this effect of breaking correlations and (a flaw shared with CDT) from unwillingness to include abstract computations in the models. CDT, on the other hand, severs too many dependencies by using its causal graph surgery heuristic.
My correction of the problem statement makes sure that the dependence of Omega’s prediction on your decision is not something that can be broken by your decision, so graph surgery should spare it. (In CDT terms, both your decision and Omega’s prediction depend on your original state, and CDT mistakenly severs this dependence by treating its decision as uncaused.)
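To make the difference concrete, here is a toy sketch of my own (the payoffs and the accuracy parameter are made-up illustrative numbers, not part of the problem statement) of how the expected values come out when the dependence of the prediction on the decision is kept, versus when it is severed by graph surgery:

```python
# A toy sketch, not part of the problem statement: payoffs and the
# accuracy parameter p are made-up illustrative numbers.
M, T = 1_000_000, 1_000  # opaque-box prize and transparent-box prize

def ev_keeping_dependence(action, p):
    """Expected value when the dependence of the prediction on the
    decision is NOT severed: the prediction matches the actual
    decision with probability p."""
    full = p if action == "one-box" else 1 - p  # P(opaque box is full | action)
    return full * M + (T if action == "two-box" else 0)

def ev_after_surgery(action, q):
    """Expected value after CDT-style graph surgery: the prediction is
    treated as already fixed, full with prior probability q that no
    longer depends on the action."""
    return q * M + (T if action == "two-box" else 0)

p = 0.99  # assumed predictor accuracy
for a in ("one-box", "two-box"):
    print(a, ev_keeping_dependence(a, p), ev_after_surgery(a, q=0.5))
# Keeping the dependence favours one-boxing for any p above ~0.5005;
# after surgery, two-boxing dominates for every q.
```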
My correction of the problem statement makes sure that the dependence of Omega’s prediction on your decision is not something that can be broken by your decision, so graph surgery should spare it.
But when you make this correction, and then compare the agents’ performance based on it, you should place the agents in the same situation, if the comparison is to be fair. In particular, the situation must be the same regarding the knowledge of this correction: the knowledge that “the dependence of Omega’s prediction on your decision is not something that can be broken by your decision”. In a regular analysis here on LW of Newcomb’s problem, TDT receives an unfair advantage, in that it is given this knowledge while CDT is not, presumably because CDT cannot represent it.
But in fact it can—why not? If it means drawing causal arrows backwards in time, so what?
And in the case of the “pure” Newcomb’s problem, where the agent knows that Omega is 100% correct, even the backward causal arrows are not needed. I think. That was what my original comment was about, and so far no one has answered...
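To spell the pure case out with a small sketch of my own (made-up payoffs again): if the prediction is simply constrained to equal the decision, the payoffs follow directly, with no arrow pointing backwards in time anywhere in the model.

```python
# "Pure" Newcomb's problem: the predictor is 100% correct, so the
# prediction is constrained to equal the decision.  Payoffs are
# illustrative made-up numbers; nothing here runs backwards in time.
M, T = 1_000_000, 1_000

def payoff(decision):
    prediction = decision  # 100% accuracy: the prediction always matches the decision
    opaque = M if prediction == "one-box" else 0
    return opaque + (T if decision == "two-box" else 0)

print(payoff("one-box"))   # 1000000
print(payoff("two-box"))   # 1000
```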
In a regular analysis here on LW of Newcomb’s problem, TDT receives an unfair advantage, in that it is given this knowledge while CDT is not, presumably because CDT cannot represent it.
The comparison doesn’t necessarily have to be fair, it only needs to accurately discern the fittest. A cat, for example, won’t even notice that an IQ test is presented before it, but that doesn’t mean that we have to make adjustments, or that the conclusion is incorrect.
If it means drawing causal arrows backwards in time, so what?
Updates are propagated in both directions, so you draw causal arrows only forwards in time; you just don’t sever this particular arrow during standard graph surgery on a standard-ish causal graph. That way, knowledge about your decision tells you something about its origins in the past, and then about the other effects of those origins on the present. But CDT is too stubborn to do that, and a re-educated CDT is not a CDT anymore, it’s half-way towards becoming a TDT.
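As a toy illustration of that propagation (my own made-up numbers, with the structure just described: the original state causes both the decision and the prediction, and every arrow points forward in time):

```python
# Toy Bayesian network with forward-pointing arrows only:
# original state S -> decision D, and S -> Omega's prediction P.
# All probabilities are made-up illustrative numbers.
P_S = {"one-boxer": 0.5, "two-boxer": 0.5}  # prior over the agent's original state
P_D_given_S = {"one-boxer": {"one-box": 0.99, "two-box": 0.01},
               "two-boxer": {"one-box": 0.01, "two-box": 0.99}}
P_P_given_S = {"one-boxer": {"pred-one": 0.99, "pred-two": 0.01},
               "two-boxer": {"pred-one": 0.01, "pred-two": 0.99}}

def prediction_given_decision(d):
    """Propagate knowledge of the decision back to S, then forward to P."""
    joint = {s: P_S[s] * P_D_given_S[s][d] for s in P_S}  # P(S, D=d)
    z = sum(joint.values())
    post_S = {s: v / z for s, v in joint.items()}         # P(S | D=d)
    return {pred: sum(post_S[s] * P_P_given_S[s][pred] for s in P_S)
            for pred in ("pred-one", "pred-two")}          # P(P | D=d)

print(prediction_given_decision("one-box"))
print(prediction_given_decision("two-box"))
# Severing the S -> D arrow (CDT's graph surgery) would make these two
# distributions identical, erasing exactly the dependence at issue.
```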
The comparison doesn’t necessarily have to be fair, it only needs to accurately discern the fittest. A cat, for example, won’t even notice that an IQ test is presented before it, but that doesn’t mean that we have to make adjustments, or that the conclusion is incorrect.
Good point.
But CDT is too stubborn to do that, and a re-educated CDT is not a CDT anymore, it’s half-way towards becoming a TDT.
Perhaps. Although it’s not clear to me why CDT is allowed to notice that its mirror image does whatever it does, but not that its perfect copy does whatever it does.
And what about the “simulation uncertainty” argument? Is it valid, or is there a mistake somewhere?