I’m going to answer this with several comments, and probably not all today. In this one I am going to make some general points which are not necessarily directly addressed to particular comments you made, but which might show more clearly why I interpret the Smoking Lesion problem the way that I do, and in what sense I was discussing how the correlation comes about.
Eliezer used the Smoking Lesion as a counterexample to evidential decision theory. It is supposed to be a counterexample by providing a case where evidential decision theory recommends a bad course of action, namely not smoking when it would be better to smoke. He needed it as a counterexample because if there are no counterexamples, there is no need to come up with an alternative decision theory.
But the stipulation that evidential decision theory recommends not smoking requires us to interpret the situation in a very subtle way where it does not sound much like something that could happen in real life, rather than the crude way where we could easily understand it happening in real life.
Here is why. In order for EDT to recommend not smoking, your actual credence that you have the lesion has to go up after you choose to smoke, and to go down after you choose not to smoke. That is, your honest evaluation of how likely you are to have the lesion has to change precisely because you made that choice.
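To make that condition concrete, here is a minimal sketch of the EDT comparison, with made-up utilities and credences; none of these numbers come from the original problem, they only illustrate the structure:

```python
# Illustrative EDT comparison. All numbers are assumptions for the sake of example.
SMOKE_BONUS = 1.0      # assumed utility of getting to smoke
LESION_COST = -100.0   # assumed disutility of having the lesion

def edt_value(p_lesion_given_action: float, smokes: bool) -> float:
    """Expected utility of an action, conditioning on the action itself as evidence."""
    return (SMOKE_BONUS if smokes else 0.0) + p_lesion_given_action * LESION_COST

# If your credence in the lesion is the same whichever way you choose,
# EDT simply recommends smoking, since you keep the bonus:
print(edt_value(0.2, smokes=True) > edt_value(0.2, smokes=False))   # True

# EDT recommends not smoking only if choosing to smoke genuinely raises
# your credence in the lesion relative to choosing not to smoke:
print(edt_value(0.9, smokes=True) > edt_value(0.2, smokes=False))   # False
```

The recommendation turns entirely on whether your conditional credence in the lesion differs between the two choices.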
Now suppose a case like the Smoking Lesion were to come up in real life. Someone like Eliezer could say, “Look, I’ve been explaining for years that you should choose smoking in these cases. So my credence that I have the lesion won’t change one iota after I choose to smoke. I know perfectly well that my choice has nothing to do with whether I have the lesion; it is because I am living according to my principles.” But if this is true, then EDT does not recommend not smoking anyway in his case. It only recommends not smoking if he will actually believe himself more likely to have the lesion, once he has chosen to smoke, than he did before he made that choice. And that means that he has not found any counterexample to EDT yet.
The need to find a counterexample absolutely excludes any kind of crude causality. If the lesion is supposed to override your normal process of choice, so that for example you start smoking without any real decision, then deciding to smoke will not increase a person’s credence that he has the lesion. In fact it might decrease it, when he sees that he made a decision in a normal way.
In a similar way, there might be a statistical association between choosing to smoke and the lesion, but it still will not increase a person’s credence that he has the lesion, if the association goes away after controlling for some other factor besides the choice, like desire for smoking. In order to have the counterexample, it has to be the case that as far as the person can tell, the correlation is directly between the lesion and the actual choice to smoke. This does not imply that any magic is happening—it refers to the state of the person’s knowledge. But he cannot have the ability to explain away the association so that his choice is clearly irrelevant; because if he does, EDT no longer recommends not smoking.
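As an aside, the “goes away after controlling” point can be made concrete with a toy joint distribution (all numbers invented) in which the lesion influences the desire to smoke and only the desire influences the actual choice:

```python
from itertools import product

P_LESION = 0.3                        # assumed base rate of the lesion
P_DESIRE = {True: 0.9, False: 0.2}    # P(desire to smoke | lesion)
P_SMOKE  = {True: 0.8, False: 0.1}    # P(choose to smoke | desire); the choice depends only on desire

def joint(lesion, desire, smoke):
    """Probability of one full outcome under the toy model."""
    p = P_LESION if lesion else 1 - P_LESION
    p *= P_DESIRE[lesion] if desire else 1 - P_DESIRE[lesion]
    p *= P_SMOKE[desire] if smoke else 1 - P_SMOKE[desire]
    return p

def p_lesion(desire=None, smoke=None):
    """P(lesion | whatever evidence is supplied); None means 'not observed'."""
    def match(d, s):
        return (desire is None or d == desire) and (smoke is None or s == smoke)
    num = sum(joint(True, d, s) for d, s in product([True, False], repeat=2) if match(d, s))
    den = sum(joint(l, d, s) for l, d, s in product([True, False], repeat=3) if match(d, s))
    return num / den

print(round(p_lesion(smoke=True), 3))                 # 0.566: above the 0.3 base rate
print(round(p_lesion(desire=True, smoke=True), 3))    # 0.659: equal to...
print(round(p_lesion(desire=True, smoke=False), 3))   # 0.659: ...this, so the choice adds nothing once desire is known
```

In a setup like that the person can explain the association away and the choice itself carries no extra evidence; the counterexample needs the correlation to hold, subjectively, with the actual choice.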
This is what I think is parallel to the fact that in Newcomb X’s causality is defined directly in relation to the choice of A or B, and makes the situations equivalent. In other words, I agree that in such an unusual situation EDT will recommend not smoking, but I disagree that there is anything wrong with that recommendation.
When Eliezer was originally discussing Newcomb, he posited a 100% correlation or virtually 100%, to make the situation more convincing. So if the Smoking Lesion is supposed to be a fair counterexample to EDT, we should do the same thing. So the best way to interpret the whole situation is like this:
The lesion has in the past had a 100% correlation with the actual choice to smoke, no matter how the particular person concluded that he should make that choice.
In every case, the person makes the choice in a manner which is psychologically normal. This is to ensure that it is not possible to remove the subjective correlation between actually choosing and the lesion; consequently, this stipulation prevents a person from declining to update his credence on the basis of his choice.
It cannot be said that these stipulations make the whole situation impossible, as long as we admit that a person’s choices, and also his mode of choice, are caused by the physical structure of the brain in any case. And even though they make the situation unlikely, this is no more the case than the equivalent stipulations in a Newcomb situation.
Nor can the response be that “you don’t have a real choice” in this situation. Even if we found out that this was true in some sense of choice, it would make no difference to the real experience of a person in this situation, which would be a normal experience of choice, and would be done in a normal manner and for normal reasons. On the contrary: you cannot get out of making a choice any more than a determinist in real life has a realistic possibility of saying, “Now that I realize all my actions are determined, I don’t have to make choices anymore.”
EDT will indeed recommend not smoking in this situation, since clearly if you choose to smoke, you will conclude with high probability that you have the lesion, and if you choose not to smoke, you will conclude with high probability that you do not.
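With the same made-up utilities as in the earlier sketch, and credences reflecting the stipulated near-100% correlation, the comparison comes out clearly against smoking:

```python
# Assumed utilities as before; credences reflect the stipulated near-100% correlation.
smoke_bonus, lesion_cost = 1.0, -100.0
print(smoke_bonus + 0.99 * lesion_cost)   # -98.0  expected utility of choosing to smoke
print(0.0 + 0.01 * lesion_cost)           # -1.0   expected utility of choosing not to smoke
```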
In order for Eliezer to have the counterexample, he needs to recommend smoking even in this situation. Presumably that would go something like this: “Look. I realize that after you follow my recommendation you will rightly conclude that you have the lesion. But we should ignore that, and consider it irrelevant, because we know that you can choose to smoke or not, while you cannot choose to have the lesion or not. So for the purposes of considering what to do, we should pretend that the choice won’t change our credence. So choose to smoke, since you prefer that in theory to not smoking. It’s just too bad that you will have to conclude that you have the lesion.”
In my opinion this would be just as wrong as the following:
“Look. I realize that after you follow my recommendation you will rightly conclude that the million is not in the box. But we should ignore that, and consider it irrelevant, because we know that you can choose to take one or two boxes, while you cannot choose to make the million be there or not. So for the purposes of considering what to do, we should pretend that the choice won’t change our credence. So take both boxes, since you would prefer the contents of both boxes to the contents of only one. It’s just too bad that you will have to conclude that the million isn’t there.”
Eliezer criticizes the “it’s just too bad” line of thinking by responding that you should stop trying to pretend it isn’t your fault, when you could have just taken one box. I say the same in the lesion case with the above stipulations: don’t pretend it isn’t your fault, when you could just decide not to smoke.
Now suppose a case like the Smoking Lesion were to come up in real life. Someone like Eliezer could say [...]
In other words, for some highly atypical people who have given a lot of explicit thought to situations like the Smoking Lesion one (and who, furthermore, strongly reject EDT), deciding to smoke wouldn’t be evidence of having the lesion and therefore the SL situation for them doesn’t work as a counterexample to EDT. I think I agree, but I don’t see why it matters.
there might be a statistical association between choosing to smoke and having the lesion, but it still will not increase a person’s credence that he has the lesion, if the association goes away after controlling for some factor besides the choice, such as desire for smoking.
Yes, I agree. Just to be clear, it seems like you’re arguing here for “EDT doesn’t necessarily say not to smoke” but elsewhere for “TDT probably says not to smoke”. Is that right? I find the first of these distinctly more plausible than the second, for what it’s worth.
So if the Smoking Lesion is supposed to be a fair counterexample to EDT, we should do the same thing [sc. posit a very-near-100% correlation].
I’m not sure I follow the logic. Even a well-sub-100% Smoking Lesion situation is (allegedly) a counterexample to EDT, and it’s not necessary to push the correlation up to almost 100% for it to serve this purpose; the reason why you need a correlation near to 100% for Newcomb is that what makes it plausible (to the chooser in the Newcomb situation) that Omega really can predict his choices is exactly the fact that the correlation is so strong. If it were much weaker, the chooser would be entirely within his rights to say “My prior against Omega having any substantial predictive ability is extremely strong; no one has shown me the sort of evidence that would change my mind about that; so I don’t think my choosing to two-box is strong evidence that Omega will leave the second box empty; so I shall take both boxes.”
It’s not clear to me that anything parallel is true about the Smoking Lesion scenario, so I don’t see why we “should” push the correlations to practically-100% in that case.
(But I don’t think what you’re saying particularly depends on the correlation being practically 100%.)
I’m not sure what TDT, or Eliezer, would say about your refined smoking-lesion situation. I will think a bit more about what I would say about it :-).
“For some highly atypical people...” The problem is that anyone who discusses this situation is a highly atypical person. And such people cannot imagine actually having a higher credence that they have the lesion, if they choose to smoke. This is why people advocate the smoking answer; and according to what I said in my other comment, it is not a “real Smoking Lesion problem” as long as they think that way, or at least they are not thinking of it as one (it could be that they are mistaken, and that they should have a higher credence, but don’t).
Just to be clear, it seems like you’re arguing here for “EDT doesn’t necessarily say not to smoke” but elsewhere for “TDT probably says not to smoke”. Is that right? I find the first of these distinctly more plausible than the second, for what it’s worth.
What I meant was: in the situations people usually think about, or at least the way they are thinking about them, EDT doesn’t necessarily say not to smoke. But these are not the situations that are equivalent to the real Newcomb problem—these are equivalent to the fake Newcomb situations. EDT does say not to smoke in the situations which are actually equivalent to Newcomb. When I said “TDT probably says not to smoke,” I was referring to the actually equivalent situations. (Although as I said, I am less confident about TDT now; it may simply be incoherent or arbitrary.)
You don’t need to have a 100% correlation either for Newcomb or for the Smoking Lesion. But you are right that the reason for a near 100% correlation for Newcomb is to make the situation convincing to the chooser. But this is just to get him to admit that the million will actually be more likely to be there if he takes only one box. In the same way, theoretically you do not need it for the Smoking Lesion. But again, you have to convince the chooser that he personally will have a higher chance of having the lesion if he chooses to smoke, and it is hard to convince people of that. As someone remarked about people’s attitude on one of the threads about this, “So the correlation goes down from 100% to 99.9% and suddenly you consider yourself one of the 0.1%?” If anything, it seems harder to convince people they are in the true Smoking Lesion situation than in the true Newcomb situation. People find Newcomb pretty plausible even if the correlation is 90%, if it holds both for one-boxers and two-boxers, but a 90% correlation in the lesion case would leave many people’s opinions about whether they have the lesion unchanged, no matter whether they choose to smoke or not.
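To give a rough sense of how much the strength of the correlation matters, take a crude model (all numbers invented) in which the correlation is read as P(smoke | lesion) = P(not smoke | no lesion) and the base rate of the lesion is low:

```python
def posterior_lesion(prior, accuracy, chose_to_smoke=True):
    """Credence in the lesion after updating on your own choice, under the crude
    model where accuracy = P(smoke | lesion) = P(not smoke | no lesion)."""
    p_choice_if_lesion    = accuracy if chose_to_smoke else 1 - accuracy
    p_choice_if_no_lesion = (1 - accuracy) if chose_to_smoke else accuracy
    num = prior * p_choice_if_lesion
    return num / (num + (1 - prior) * p_choice_if_no_lesion)

for accuracy in (0.90, 0.999):
    print(accuracy, round(posterior_lesion(prior=0.05, accuracy=accuracy), 3))
# 0.9   -> 0.321  a 90% correlation moves a 5% prior only to about a third
# 0.999 -> 0.981  a near-100% correlation pushes it close to certainty
```

So how much the choice changes a person’s credence depends on both the correlation and his prior, which fits the point that a 90% correlation may leave many people’s opinions roughly where they were.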
(But I don’t think what you’re saying particularly depends on the correlation being practically 100%.)
This is correct.