Do you think that if a lesion has a 100% chance to cause you to decide to smoke, and you do not decide to smoke, you might have the lesion anyway?
No. But the counterfactual probability of having the lesion given that you smoke is identical to the counterfactual probability given that you don’t smoke. This follows directly from the meaning of a counterfactual, and you claimed to know what counterfactuals are. Are you just arguing against the idea of counterfactual probability playing a role in decisions?
“Counterfactual probability”, in the sense you mean here, should not play a role in a decision that is itself the effect of something else, unless that cause is taken into account.
In other words, the counterfactual you are talking about is this: “If I could change the decision without the lesion changing, the probability of having the lesion is the same.”
That’s true, but entirely irrelevant to any reasonable decision, because the decision cannot be different without the lesion being different.
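To make the distinction concrete, here is a small sketch with made-up numbers (the base rate and the smoking rates below are illustrative assumptions, not anything from this exchange). The conditional probability of the lesion given that you smoke goes up, because smoking is evidence of the lesion; the interventional or “counterfactual” probability, which imagines the act changing while everything upstream stays fixed, is just the base rate either way.

```python
from fractions import Fraction as F

# Illustrative assumptions (made-up numbers):
P_LESION = F(1, 10)               # assumed base rate of the lesion
P_SMOKE_GIVEN_LESION = F(95, 100) # assumed smoking rate with the lesion
P_SMOKE_GIVEN_NONE = F(5, 100)    # assumed smoking rate without it

# Evidential (Bayes) reading: smoking is evidence of the lesion.
p_smoke = (P_SMOKE_GIVEN_LESION * P_LESION
           + P_SMOKE_GIVEN_NONE * (1 - P_LESION))
p_lesion_given_smoke = P_SMOKE_GIVEN_LESION * P_LESION / p_smoke

# Interventional reading: forcing the act leaves the lesion's
# probability untouched, so it equals the base rate.
p_lesion_do_smoke = P_LESION

print(p_lesion_given_smoke)  # 19/28
print(p_lesion_do_smoke)     # 1/10
```

The point of contention is exactly which of these two numbers should feed into the decision.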
I’m denying CDT, but it is a mistake to equate CDT with Eliezer’s opinion anyway. CDT says you should two-box in Newcomb; Eliezer says you should one-box (and he is right about that).
More specifically: you assert that in Newcomb, you cause Omega’s prediction. That’s wrong. Omega’s prediction is over and done with, a historical fact. Nothing you can do will change that prediction.
Instead, it is true that “Thinking AS THOUGH I could change Omega’s prediction will get good results, because I will choose to one-box, and it will turn out that Omega predicted that.”
It is equally true that “Thinking AS THOUGH I could change the lesion will get good results, because I will choose not to smoke, and it will turn out that I did not have the lesion.”
In both cases your real causality is zero. In both cases thinking as though you can cause something has good results.
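The symmetry between the two cases can be put in numbers. Here is a hedged sketch for Newcomb with assumed figures (a 99%-reliable predictor and the usual $1,000,000 / $1,000 payouts): an evidential calculation, which treats the choice as evidence about the already-made prediction, favors one-boxing, while holding the prediction fixed at any prior makes two-boxing look better by exactly the transparent box.

```python
from fractions import Fraction

# Assumed numbers: a 99%-reliable predictor and standard payouts.
ACCURACY = Fraction(99, 100)  # assumed reliability of Omega's prediction
BOX_B = 1_000_000             # opaque box, filled iff one-boxing was predicted
BOX_A = 1_000                 # transparent box, added by two-boxing

def edt_value(action):
    # Evidential reading: conditioning on the act shifts the
    # probability of what Omega (already) predicted.
    p_predicted_one = ACCURACY if action == "one-box" else 1 - ACCURACY
    payout = p_predicted_one * BOX_B
    return payout + (BOX_A if action == "two-box" else 0)

def cdt_value(action, p_predicted_one):
    # Causal reading: the prediction is a fixed historical fact,
    # so its probability is the same whatever you choose.
    payout = p_predicted_one * BOX_B
    return payout + (BOX_A if action == "two-box" else 0)

print(edt_value("one-box"))  # 990000
print(edt_value("two-box"))  # 11000
# Holding the prediction fixed, two-boxing is better by exactly BOX_A:
half = Fraction(1, 2)
print(cdt_value("two-box", half) - cdt_value("one-box", half))  # 1000
```

The same table works for the lesion: swap the prediction for the lesion and the boxes for the payoffs of smoking, and the evidential calculation again favors the act that correlates with the good news, even though the act causes nothing upstream.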
So all you’re doing is denying CDT and asserting EDT is the only reasonable theory, like I thought.
I’m not equating them. TDT is CDT with some additional claims about causality for logical uncertainties.
You deny those claims, but causality doesn’t matter to you anyway, because you deny CDT.