In the Newcomb case, there’s a disagreement about whether one-boxing can actually somehow cause there to be a million dollars in the box; CDT denies this possibility (because it takes no account of sufficiently accurate predictors), while timeless/logical/functional/whatever decision theories accept it.
To be clear, FDT does not accept causation that happens backwards in time. It’s not claiming that the action of one-boxing itself causes there to be a million dollars in the box. It’s the agent’s algorithm and, further down the causal diagram, Omega’s simulation of this algorithm, that cause the million dollars. The causation happens before the prediction and is nothing special in that sense.
Yes, sure. Indeed we don’t need to accept causation of any kind, in any temporal direction. We can simply observe that one-boxers get a million dollars, and two-boxers do not. (In fact, even if we accept shminux’s model, this changes nothing about what the correct choice is.)
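To make the “just look at the outcomes” point concrete, here is a minimal sketch (my own illustration, not part of the original thread), assuming Omega predicts by running the agent’s own decision procedure. The same function determines both the prediction and the later choice, so nothing causes anything backwards in time, and one-boxers simply end up richer:

```python
def payoff(policy):
    """Winnings of an agent whose decision procedure is `policy`,
    which returns 'one-box' or 'two-box'."""
    prediction = policy()               # Omega simulates the algorithm first
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = policy()                   # the agent later runs the same algorithm
    return box_b if choice == "one-box" else box_b + 1_000

print(payoff(lambda: "one-box"))   # 1000000
print(payoff(lambda: "two-box"))   # 1000
```

(With a merely very accurate predictor instead of a perfect one, the expected values change a little, but the ranking of the two policies does not.)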
Eh? This kind of reasoning leads to failing to smoke on Smoking Lesion.