In one there’s counterfactual dependence and in the other there isn’t. If your model doesn’t take into account counterfactuals then you can’t even tell the difference between smoking lesions and the case where smoking really does cause cancer.
Exactly. There is no difference; either way you should not smoke.
Also, what do you mean by saying that there is “counterfactual dependence” in one case and not in the other? Do you disagree with my previous comment? Do you think that I would have had the lesion no matter what I decided, in a situation where having the lesion has a 100% chance of causing smoking?
So you’re not just arguing with Eliezer, you’re arguing with the entirety of causal decision theory.
I strongly suspect you don’t understand causal decision theory at this point, or counterfactuals as used by it. If this is the case, see https://en.wikipedia.org/wiki/Causal_decision_theory, or http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/, or https://wiki.lesswrong.com/wiki/Causal_Decision_Theory
Those links explain it better than I can quickly, but I’ll try anyway: counterfactuals ask “if you reached into the universe from outside and changed A, what would happen?” Only things caused by A change, not things merely correlated with A.
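To make that concrete, here is a minimal sketch in Python with made-up numbers (the base rate, the utilities, and the 100% lesion-to-smoking link are all illustrative assumptions, not part of the problem statement above): EDT conditions on the action, so the lesion probability moves with the choice, while the CDT counterfactual intervenes on the action, so only the action’s effects change and the lesion keeps its base rate.

```python
# Illustrative numbers only: a hypothetical smoking-lesion setup where the
# lesion (if present) causes smoking with certainty and also causes cancer.
P_LESION = 0.3      # assumed base rate of the lesion
U_SMOKING = 10      # assumed utility of the enjoyment of smoking
U_CANCER = -1000    # assumed disutility of cancer, which only the lesion causes

def edt_eu(smoke: bool) -> float:
    # EDT conditions on the action: with the 100% lesion -> smoking link,
    # "I smoke" implies the lesion and "I don't smoke" implies no lesion.
    p_lesion_given_action = 1.0 if smoke else 0.0
    return (U_SMOKING if smoke else 0) + p_lesion_given_action * U_CANCER

def cdt_eu(smoke: bool) -> float:
    # The CDT counterfactual reaches in and changes only the action:
    # the lesion is merely correlated with it, so it keeps its base rate.
    return (U_SMOKING if smoke else 0) + P_LESION * U_CANCER

for smoke in (False, True):
    print(f"smoke={smoke}: EDT EU={edt_eu(smoke):7.1f}, CDT EU={cdt_eu(smoke):7.1f}")
# EDT prefers not smoking; CDT prefers smoking, because smoking is only
# correlated with the lesion, not a cause of it.
```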
I understand causal decision theory, and yes, I disagree with it. That should be obvious since I am in favor of both one-boxing and not smoking.
(Also, if you reach inside and change your decision in Newcomb, that will not change what is in the box any more than changing your decision will change whether you have a lesion.)
So why did you ask me what I meant about counterfactuals? If you take the TDT assumption that identical copies of you counterfactually affect each other, then Newcomb has counterfactual dependence and the Lesion doesn’t.
I’m not sure of your point here.
I don’t think there is any difference even with that assumption. Newcomb and the Lesion are entirely equivalent. Modify it to the situation from the earlier discussion of this topic. The Lesion case works like this: the lesion causes people to take two boxes, and the absence of the lesion causes people to take one box. The other parts are the same, except that Omega just checks whether you have the lesion in order to make his prediction. Then we have the two cases:
1. Regular Newcomb: I am a certain kind of algorithm, either one that is going to one-box or one that is going to two-box.
   Lesion Newcomb: I either have the lesion and am going to take both boxes, or I don’t and am going to take only one.
2. Regular Newcomb: Omega checks my algorithm and decides whether to put in the million.
   Lesion Newcomb: Omega checks the lesion and decides whether to put in the million.
3. Regular Newcomb: I decide whether to take one or two boxes.
   Lesion Newcomb: I decide whether to take one or two boxes.
4. Regular Newcomb: If I decided to take one box, it turns out that I had the one-boxing algorithm, that Omega predicted it, and I get the million. If I decided to take both boxes, the opposite occurs.
   Lesion Newcomb: If I decided to take one box, it turns out that I did not have the lesion, Omega saw that I did not, and I get the million. If I decided to take both boxes, it turns out that I had the lesion, Omega saw it, and I do not get the million.
This is a simple case of substituting terms. The cases are identical.
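Here is a small sketch of the substitution being claimed, with illustrative payoffs and the 100% lesion-to-choice link assumed above. It tracks only the realized outcomes, which is the level at which the comparison is being made here; the reply below objects that this leaves out the causal structure.

```python
MILLION, THOUSAND = 1_000_000, 1_000   # illustrative payoffs

def payoff(one_boxes: bool, predicted_one_box: bool) -> int:
    # Box B holds the million iff Omega predicted one-boxing;
    # a two-boxer also takes the transparent thousand.
    return (MILLION if predicted_one_box else 0) + (0 if one_boxes else THOUSAND)

def regular_newcomb(agent_one_boxes: bool) -> int:
    # Omega inspects the algorithm, i.e. reads off what it will do.
    prediction = agent_one_boxes
    return payoff(agent_one_boxes, prediction)

def lesion_newcomb(has_lesion: bool) -> int:
    # Assumed 100% link: the lesion causes two-boxing, its absence one-boxing.
    agent_one_boxes = not has_lesion
    # Omega only checks for the lesion.
    prediction = not has_lesion
    return payoff(agent_one_boxes, prediction)

# Under the 100% assumption the outcome tables coincide term for term.
for one_boxes in (True, False):
    assert regular_newcomb(one_boxes) == lesion_newcomb(has_lesion=not one_boxes)
    print("one-boxer" if one_boxes else "two-boxer", regular_newcomb(one_boxes))
```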
Well, it depends on what procedure Omega uses: you can’t change the procedure and assert that the same result obtains! If Omega predicts you by simulating you, that creates a causal dependence, but not if Omega predicts you from your genes or similar. Your comparison is not accounting for that causal relationship.
In the lesion case, I am assuming that the lesion has 100% chance of causing you to make a certain decision. If that is not assumed, we are not discussing the situation I am talking about.
So the causal process is like this:
1. Lesion exists.
2. Lesion causes a certain thought process (e.g. “I really, really want to smoke. And according to TDT, I should smoke, because smoking doesn’t cause cancer. So I think I will.”)
3. Thought process causes smoking, and lesion causes cancer.
I just simulated the lesion process by thinking about it. Omega does the same thing; the details of step 2 are irrelevant, as long as we know that the lesion will cause a thought process that will cause smoking.
In the lesion case, I am assuming that the lesion has 100% chance of causing you to make a certain decision.
Sure.
The details of step 2 are irrelevant, but the details of how Omega works are relevant. If Omega checks for the lesion, then your choice has no counterfactual causal effect on Omega. If Omega simulates your mind, then your choice does have a counterfactual causal effect.
Lesion → thought process → choice.
TDT says to choose as if you’re determining the outcome of your thought process. If Omega predicts from there, your optimal choice differs from the case where Omega predicts from the lesion.
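A toy version of that graph, as one way to picture the claim (the node names, the surgery point, and the deterministic links are all modelling assumptions made here for illustration): intervene at the thought-process node while holding the lesion fixed, and check whether Omega’s prediction moves with the intervention.

```python
# Toy structural model of: lesion -> thought process -> choice,
# with Omega reading either the thought process or the lesion.

def run(has_lesion, omega_reads, surgery=None):
    thought = "two-box" if has_lesion else "one-box"   # lesion drives the thought process
    if surgery is not None:                            # TDT-style surgery at the thought node,
        thought = surgery                              # leaving the lesion as it is
    choice = thought                                   # the thought process drives the choice
    if omega_reads == "thought":
        prediction = thought                           # Omega simulates / reads the thought process
    else:
        prediction = "two-box" if has_lesion else "one-box"   # Omega just checks the lesion
    return choice, prediction

for omega_reads in ("thought", "lesion"):
    _, pred_a = run(True, omega_reads, surgery="one-box")
    _, pred_b = run(True, omega_reads, surgery="two-box")
    print(f"Omega reads the {omega_reads}: prediction tracks the surgery -> {pred_a != pred_b}")
# When Omega reads the thought process, the prediction co-varies with the
# surgery; when Omega reads the lesion, it does not. That is the asserted
# difference in counterfactual dependence.
```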
So you’re saying that if Omega predicts from your thought process, you choose one-boxing or not smoking, but if Omega predicts directly from the lesion, you choose two-boxing or smoking?
I don’t see how that is relevant. The description I gave above still applies. If you choose one-boxing / not smoking, it turns out that you get the million and didn’t have the lesion. If you choose two-boxing / smoking, it turns out that you don’t get the million, and you had the lesion. This is true whether you followed the rule you suggest or any other. So if TDT recommends smoking when Omega predicts from the lesion, then TDT gives the wrong answer in that case.
Well as I said above, this ignores causality. Of course if you ignore causality, you’ll get the EDT answers.
And if you define the right answer as the EDT answer, then whenever it differs from another decision theory you’ll think the other theory gets the wrong answer.
None of this is particularly interesting, and I already made these points above.
When you say, “this ignores causality,” do you intend to assert the opposite of those statements?
Do you think that if a lesion has a 100% chance to cause you to decide to smoke, and you do not decide to smoke, you might have the lesion anyway?
No. But the counterfactual probability of having the lesion given that you smoke is identical to the counterfactual probability given that you don’t smoke. This follows directly from the meaning of counterfactual, and you claimed to know what they are. Are you just arguing against the idea of counterfactual probability playing a role in decisions?
“Counterfactual probability”, in the way you mean it here, should not play a role in a decision that is itself the effect of something else, when the counterfactual does not take that cause into account.
In other words, the counterfactual you are talking about is this: “If I could change the decision without the lesion changing, the probability of having the lesion is the same.”
That’s true, but entirely irrelevant to any reasonable decision, because the decision cannot be different without the lesion being different.
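The two claims above can be put side by side in a small sketch (the base rate, the seed, and the sample count are illustrative): under the 100% assumption, the decision that actually occurs pins down the lesion, yet setting the decision “from outside” leaves the lesion probability at its base rate either way.

```python
import random
random.seed(0)
P_LESION = 0.3                      # illustrative base rate

def world(force_smoke=None):
    lesion = random.random() < P_LESION
    smoke = lesion                  # assumed 100% link: smoke iff lesion
    if force_smoke is not None:     # "change the decision without the lesion changing"
        smoke = force_smoke
    return lesion, smoke

N = 100_000
obs = [world() for _ in range(N)]
smokers = [lesion for lesion, smoke in obs if smoke]
abstainers = [lesion for lesion, smoke in obs if not smoke]
# Observational: the actual decision reveals the lesion with certainty.
print("P(lesion | smoke)     =", sum(smokers) / len(smokers))
print("P(lesion | not smoke) =", sum(abstainers) / len(abstainers))
# Counterfactual/interventional: the lesion probability is the same either way.
for forced in (True, False):
    forced_runs = [world(force_smoke=forced) for _ in range(N)]
    print(f"P(lesion | do(smoke={forced})) ~", sum(lesion for lesion, _ in forced_runs) / N)
```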
So all you’re doing is denying CDT and asserting EDT is the only reasonable theory, like I thought.
I’m denying CDT, but it is a mistake to equate CDT with Eliezer’s opinion anyway. CDT says you should two-box in Newcomb; Eliezer says you should one-box (and he is right about that).
More specifically: you assert that in Newcomb, you cause Omega’s prediction. That’s wrong. Omega’s prediction is over and done with, a historical fact. Nothing you can do will change that prediction.
Instead, it is true that “Thinking AS THOUGH I could change Omega’s prediction will get good results, because I will choose to take only one box, and it will turn out that Omega predicted that.”
It is equally true that “Thinking AS THOUGH I could change the lesion will get good results, because I will choose not to smoke, and it will turn out that I did not have the lesion.”
In both cases your real causality is zero. In both cases thinking as though you can cause something has good results.
I’m not equating them. TDT is CDT with some additional claims about causality for logical uncertainties.
You deny those claims, but causality doesn’t matter to you anyway, because you deny CDT.