You’re correct that if the correlation were known to be 100% then the only meaningful advice one could give would be not to smoke. However, it’s important to understand that “100% correlation” is a degenerate case of the Smoking Lesion problem, as I’ll try to explain:
Imagine a problem of the following form: Y is a variable under our control, which we can set to either k or -k for some k >= 0 (0 is not ruled out). X is an N(0, m^2) random variable which we do not observe, for some m >= 0 (again, 0 is not ruled out). Our payoff has the form (X + Y) − 1000(X + rY) for some constant r with 0 <= r <= 1. Working out the optimal strategy is trivial: since E[X] = 0, the expected payoff is Y(1 − 1000r). In the edge cases: if r = 0 we should put Y = k, and if r = 1 we should put Y = -k.
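As a sanity check on that claim, here is a minimal sketch in Python (the function name and the choice m = k = 1 are mine, purely for illustration). Because both candidate choices of Y are evaluated against the same simulated draws of X, the comparison reduces exactly to the sign of 1 − 1000r:

```python
import numpy as np

def expected_payoff(y, r, m=1.0, n=100_000, seed=0):
    """Monte Carlo estimate of E[(X + Y) - 1000*(X + r*Y)] with X ~ N(0, m^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, m, size=n)
    return np.mean((x + y) - 1000 * (x + r * y))

# Since E[X] = 0, the expectation collapses to Y * (1 - 1000*r),
# so the optimal choice flips from Y = k to Y = -k as r crosses 1/1000.
k = 1.0
for r in (0.0, 1.0):
    best = max((k, -k), key=lambda y: expected_payoff(y, r))
    print(r, best)  # prints: 0.0 1.0, then 1.0 -1.0
```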
Now I want to say that the case r = 0 is analogous to the Smoking Lesion problem and the case r = 1 is analogous to Newcomb’s problem (with a flawless predictor):
Y corresponds to the part of a person’s will that they exercise conscious control over.
X corresponds to the part of a person’s will that just does whatever it does in spite of the person’s best intentions.
The ratio k : m measures “the extent to which we have free will.”
The (X + Y) term is the ‘temptation’, analogous to the extra $1000 in the second box or the pleasure gained from smoking.
The −1000(X + rY) is the ‘punishment’, analogous to the loss of the $1000000 in the first box, or getting cancer.
The constant r measures the extent to which even the ‘free part’ of our will is visible to whoever or whatever decides whether to punish us. (For simplicity, we take for granted that the ‘unfree part’ of our will is visible, but this needn’t always be the case.)
In the case of Newcomb’s Problem (with a perfect predictor) we have r = 1 and so the X term becomes irrelevant—we may as well simply treat the player as being ‘totally in control of their decision’.
In the case of the Smoking Lesion problem, the ‘Lesion’ is just some predetermined physiological property over which the player exercises no control. Whether the player smokes and whether they get cancer are conditionally independent given the presence or absence of the Lesion. This corresponds to r = 0 (and note that the problem wouldn’t even make sense unless m > 0). But then the only way to have 100% correlation between ‘temptation’ and ‘punishment’ is to put k = 0, so that the person’s will is ‘totally unfree’. And if the person’s will is ‘totally unfree’ then it doesn’t really make sense to treat them as a decision-making agent.
ETA: Perhaps this analogy can be developed into an analysis of the original problems. One way to do it would be to define random variables Z and W taking values 0, 1 such that log(P(Z = 1 | X, Y) / P(Z = 0 | X, Y)) = a linear combination of X and Y (i.e. a logistic model; likewise for W, but with a different linear combination), and then have Z be the “player’s decision” and W be “Omega’s decision / whether the person gets cancer”. But I think the ratio of extra work to extra insight would be quite high.
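For concreteness, the construction just described might look like the following sketch (the coefficients 2.0 are arbitrary illustrative choices, not part of the original setup; setting W’s coefficient on Y to 0 plays the role of r = 0):

```python
import numpy as np

def logistic_draw(rng, a, b, x, y):
    """Draw a 0/1 variable whose log-odds P(=1)/P(=0) equal a*x + b*y."""
    p = 1.0 / (1.0 + np.exp(-(a * x + b * y)))
    return rng.binomial(1, p)

rng = np.random.default_rng(0)
m, k = 1.0, 1.0                        # illustrative scales
x = rng.normal(0.0, m)                 # the 'unfree' part of the will
y = k                                  # the 'free' choice: +k or -k
z = logistic_draw(rng, 2.0, 2.0, x, y) # Z: the "player's decision"
w = logistic_draw(rng, 2.0, 0.0, x, y) # W: "Omega / cancer"; no weight on Y, as when r = 0
```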