You are basically saying that there is no way to know what you are going to do before you actually do it.
I am not saying that. Number 2 is different from number 3 -- you can decide whether to smoke, before actually smoking.
What you cannot do is know what you are going to decide before you decide it. This is evident from the meaning of deciding to do something, but we can look at a couple of examples:
Suppose a chess computer has three options at a particular point. It does not yet know which one it is going to play, and it has not yet decided. Your argument is that it should be able to first find out what it is going to decide, and then decide it. This is a contradiction: suppose it finds out that it is going to play the first option. Then it is silly to say it has not yet decided; it has already decided on the first.
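The chess-computer point can be made concrete with a minimal sketch (all names and the scoring function here are hypothetical, invented for illustration): any procedure that "finds out what the machine will decide" has to run the deciding computation itself, so there is no separate peek that happens before the decision.

```python
# Toy "chess computer" that decides among three options by maximizing
# a score. The evaluation function is a stand-in, not a real engine.
def evaluate(move: str) -> int:
    return {"a": 3, "b": 5, "c": 1}[move]

def decide(options):
    # Deciding = running the computation that selects an option.
    return max(options, key=evaluate)

def find_out_what_i_will_decide(options):
    # The only way to "find out" is to run the deciding computation --
    # but running it *is* deciding. There is no earlier vantage point.
    return decide(options)

options = ["a", "b", "c"]
assert find_out_what_i_will_decide(options) == decide(options)
```

The assertion holds trivially, and that triviality is the point: "finding out" and "deciding" are the same computation, so the former cannot precede the latter.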
Suppose your friend says, “I have two options for vacation, China and Mexico. I haven’t decided where to go yet, but I already know that I am going to go to China and not to Mexico.” That is silly; if he already knows that he is going to go to China, he has already decided.
In any case, if you could know before deciding (which is absurd), we could just modify the original situation so that the lesion is correlated with knowing that you are going to smoke. Then since I already know I would not smoke, I know I would not have the lesion, while since you presumably know you would smoke, you know you would have the lesion.
So the distinction between acquiring information and action stands?
Yes, but not in the sense that you wanted it to. That is, you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
As I have said before, if you have information in advance about whether you have the lesion, or whether the million is in the box, then it is better to smoke or take both boxes. But if you do not, it is better not to smoke and to take only one box.
you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
I don’t agree with that—what, until the moment I make the decision I have no clue, zero information, about what I will decide? -- but that may not be relevant at the moment.
If I decide to smoke but take no action, is there any problem?
I agree that you can have some probable information about what you will decide before you are finished deciding, but as you noted, that is not relevant anyway.
If I decide to smoke but take no action, is there any problem?
It isn’t clear what you mean by “is there any problem?” If you mean, is there a problem with this description of the situation, then yes, some cause is missing from it. In other words, once you decide to smoke, you will smoke unless something comes up to prevent it: e.g. the cigarettes are missing, or you change your mind, or you simply forget about it.
If you meant, “am I likely to get cancer,” the answer is yes, because the lesion is correlated with deciding to smoke, and it causes cancer. So even if something comes up to prevent smoking, you still likely have the lesion, and are therefore likely to get cancer.
Newcomb is similar: if you decide to take only one box, but then absentmindedly grab them both, the million is likely to be there; while if you decide to take both, but the second box slips out of your hands, the million is likely not to be there.
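The claim that cancer risk tracks the decision rather than the act can be checked with a quick simulation. All of the probabilities below are hypothetical, chosen only for illustration: the lesion raises the probability of *deciding* to smoke, something may still prevent the act, and the lesion alone causes cancer.

```python
import random

# Hypothetical numbers, for illustration only.
random.seed(0)

def trial():
    lesion = random.random() < 0.5
    # The lesion is correlated with the *decision* to smoke.
    decides_to_smoke = random.random() < (0.9 if lesion else 0.1)
    # Something may still come up to prevent the act itself.
    smokes = decides_to_smoke and random.random() < 0.8
    # The lesion, not the smoking, is what causes cancer here.
    cancer = lesion and random.random() < 0.9
    return decides_to_smoke, smokes, cancer

trials = [trial() for _ in range(100_000)]

def p_cancer(pred):
    rows = [t for t in trials if pred(t)]
    return sum(1 for t in rows if t[2]) / len(rows)

print(p_cancer(lambda t: t[0]))               # decided to smoke: high
print(p_cancer(lambda t: t[0] and not t[1]))  # decided, but prevented: still high
print(p_cancer(lambda t: not t[0]))           # decided not to smoke: low
```

The middle line is the dialogue's point: the group that decided to smoke but was prevented has essentially the same cancer rate as the deciders overall, and both are far above the rate for those who decided not to smoke.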
It isn’t clear what you mean by “is there any problem?”
Much of the confusion around the Smoking Lesion centers on whether your choice makes any difference to the outcome. If we disassemble the choice into two components of “learning” and “doing”, it becomes clear (to me, at least) that the “learning” part will cause you to update your estimates and the “doing” part will, er, do nothing. In this framework there is no ambiguity about causality, free will, etc.
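The learning/doing decomposition can be put in numbers (all hypothetical, for illustration): observing your own decision is evidence about the lesion, so the "learning" part updates your estimate by Bayes' rule, while forcing the physical act, which has no causal arrow back to the lesion, updates nothing.

```python
# Hypothetical probabilities, chosen only for illustration.
p_lesion = 0.5
p_decide_given_lesion = 0.9
p_decide_given_no_lesion = 0.1

# "Learning": observing the decision to smoke is evidence about the
# lesion, so it moves the estimate (Bayes' rule).
p_decide = (p_decide_given_lesion * p_lesion
            + p_decide_given_no_lesion * (1 - p_lesion))
p_lesion_given_decide = p_decide_given_lesion * p_lesion / p_decide
print(p_lesion_given_decide)  # 0.9

# "Doing": the act is downstream of the decision and has no causal
# effect on the lesion, so intervening on it leaves the prior untouched.
p_lesion_given_forced_act = p_lesion
print(p_lesion_given_forced_act)  # 0.5
```

On these numbers, learning that you decided to smoke moves the lesion estimate from 0.5 to 0.9, while the act itself moves nothing, which is exactly the split the paragraph above describes.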
Your argument is that it should be able to first find out what it is going to decide, and then decide it.
That’s fine, I never claimed anything like that.
Much of the confusion around the Smoking Lesion centers on whether your choice makes any difference to the outcome. If we disassemble the choice into two components of “learning” and “doing”, it becomes clear (to me, at least) that the “learning” part will cause you to update your estimates and the “doing” part will, er, do nothing. In this framework there is no ambiguity about causality, free will, etc.
You seem to be ignoring the deciding again. But in any case, I agree that causality and free will are irrelevant. I have been saying that all along.