I would guess that the common factor causes her to have an urge to smoke, and if she wants to find out whether she has cancer, it is irrelevant whether she actually smokes, she only has to see whether she has the urge. She has it, dang. Time to smoke.
You offer a reasonable interpretation, but the Smoking Lesion problem only becomes interesting if even after accounting for the urge to smoke, whether you actually smoke provides information on whether you are likely to get lung cancer.
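The screening-off point can be made concrete. Under the interpretation where the lesion influences smoking only through the urge, once you condition on the urge, the act of smoking carries no further information about cancer. A minimal sketch with made-up numbers (the causal structure lesion → urge → smoke, lesion → cancer, and all probabilities, are assumptions for illustration):

```python
# Toy Smoking Lesion model. Hypothetical numbers throughout.
# Assumed structure: lesion -> urge -> smoke, lesion -> cancer.
P_LESION = 0.3
P_URGE_GIVEN_LESION = {True: 0.9, False: 0.1}
P_SMOKE_GIVEN_URGE = {True: 0.8, False: 0.2}
P_CANCER_GIVEN_LESION = {True: 0.7, False: 0.05}

def joint(lesion, urge, smoke, cancer):
    """Joint probability of one full assignment under the assumed structure."""
    p = P_LESION if lesion else 1 - P_LESION
    p *= P_URGE_GIVEN_LESION[lesion] if urge else 1 - P_URGE_GIVEN_LESION[lesion]
    p *= P_SMOKE_GIVEN_URGE[urge] if smoke else 1 - P_SMOKE_GIVEN_URGE[urge]
    p *= P_CANCER_GIVEN_LESION[lesion] if cancer else 1 - P_CANCER_GIVEN_LESION[lesion]
    return p

def p_cancer_given(urge, smoke):
    """P(cancer | urge, smoke), marginalising over the unobserved lesion."""
    num = sum(joint(l, urge, smoke, True) for l in (True, False))
    den = sum(joint(l, urge, smoke, c) for l in (True, False) for c in (True, False))
    return num / den

# Conditional on the urge, smoking is screened off from cancer:
# p_cancer_given(urge=True, smoke=True) == p_cancer_given(urge=True, smoke=False)
```

If the lesion reaches smoking only via the urge, the smoking term cancels out of the conditional, so the two values are identical; that is exactly why the problem is only interesting if smoking carries information *beyond* the urge.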
If you know your source code, then whether you actually smoke cannot provide additional Bayesian information on whether you are likely to get lung cancer. Your decision is a logical consequence of your source code, and Bayesianism assumes logical omniscience. Humans in general don’t know their source code, but if Susan is using a formal decision theory to make this decision (which should be assumed since we’re asking what decision theory Susan should use), then she knows her source code for the purpose of making this decision.
If it’s not the urge, what is it? The decision algorithm? If so, I don’t think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn’t apply to the smoker, as they’re using a different decision-making process.
I don’t think you’re taking the thought experiment seriously enough and are prematurely considering it (dis)solved by giving a Clever solution. E.g.
If it’s not the urge, what is it?
Obvious alternative that occurred to me in <5 seconds: It’s not the urge, it’s the actual act of smoking or knowing one has smoked. Even if these turn out not to quite work, you don’t show any sign of having even thought of them, which I would not expect if you were seriously engaging with the problem looking for a reduction that does not leave us feeling confused.
Edit: In fact, James already effectively said ‘the act of smoking’ in the comment to which you were replying!
becomes interesting if even after accounting for the urge to smoke, whether you actually smoke provides information on whether you are likely to get lung cancer.
It’s not enough to say “the act of smoking”. What’s the causal pathway that leads from the lesion to the act of smoking?
Anyway, the smoking lesion problem isn’t confusing. It has a clear answer (smoking doesn’t cause cancer), and it’s only interesting because it can trip up attempts at mathematising decision theory.
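The “trips up” point has a standard concrete form: with a toy joint distribution (all numbers invented for illustration), a naive evidential calculation conditions on the act as evidence and refuses to smoke, while a causal calculation treats smoking as an intervention that cannot change the lesion, and smokes. A hedged sketch:

```python
# Hypothetical numbers: the lesion makes both smoking and cancer more likely.
P_LESION = 0.3
P_SMOKE_GIVEN_LESION = {True: 0.8, False: 0.2}
P_CANCER_GIVEN_LESION = {True: 0.7, False: 0.05}
U_SMOKE = 1.0       # Susan enjoys smoking a little
U_CANCER = -100.0   # cancer is very bad

def p_lesion_given_smoke(smoke):
    """Bayes: observing that one smokes is evidence about the lesion."""
    p_s = P_SMOKE_GIVEN_LESION[True] if smoke else 1 - P_SMOKE_GIVEN_LESION[True]
    p_s_no = P_SMOKE_GIVEN_LESION[False] if smoke else 1 - P_SMOKE_GIVEN_LESION[False]
    num = P_LESION * p_s
    return num / (num + (1 - P_LESION) * p_s_no)

def edt_value(smoke):
    """Evidential expected utility: condition on the act as evidence."""
    pl = p_lesion_given_smoke(smoke)
    p_cancer = pl * P_CANCER_GIVEN_LESION[True] + (1 - pl) * P_CANCER_GIVEN_LESION[False]
    return (U_SMOKE if smoke else 0.0) + p_cancer * U_CANCER

def cdt_value(smoke):
    """Causal expected utility: intervening on smoking leaves the lesion alone."""
    p_cancer = P_LESION * P_CANCER_GIVEN_LESION[True] \
        + (1 - P_LESION) * P_CANCER_GIVEN_LESION[False]
    return (U_SMOKE if smoke else 0.0) + p_cancer * U_CANCER

# Naive EDT prefers not smoking; CDT prefers smoking, the intended verdict.
```

The divergence is the whole content of the litmus test: both calculations use the same numbers, and the disagreement is purely about whether the act is treated as evidence or as an intervention.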
It’s not enough to say “the act of smoking”. What’s the causal pathway that leads from the lesion to the act of smoking?
Exactly, that’s part of the problem. You have a bunch of frequencies based on various reference classes, without further information, and you have to figure out how the agent should act on that very limited information, which does not include explicit, detailed causal models. Not all possible worlds are even purely causal, so your point about causal pathways is at best an incomplete solution. That’s the hard edge of the problem, and even if the correct answer turns out to be ‘it depends’ or ‘the question doesn’t make sense’ or involves a dissolution of reference classes or whatever, then one paragraph isn’t going to provide a solution and cut through the confusions behind the question.
It seems like your argument proves too much because it would dismiss taking Newcomb’s problem seriously. ‘It’s not enough to say the act of two-boxing...’ I don’t think your attitude would have been productive for the progression of decision theory if people had applied it to other problems that are more mainstream.
It has a clear answer (smoking doesn’t cause cancer), and it’s only interesting because it can trip up attempts at mathematising decision theory.
That’s exactly the point Wei Dai is making in the post I linked!! Decision theory problems aren’t necessarily hard to find the correct specific answers to if we imagine ourselves in the situation. The point is that they are litmus tests for decision theories, and they make us draw up more robust general decision processes or illuminate our own decisions.
If you had said
If so, I don’t think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn’t apply to the smoker, as they’re using a different decision-making process.
in response to Newcomb’s problem, then most people here would see this as a flinch away from getting your hands dirty engaging with the problem. Maybe you’re right and this is a matter whereof we cannot speak, but simply stating that is not useful to those who do not already believe that, and given the world we live in, can come across as a way of bragging about your non-confusion or lowering their ‘status’ by making it look like they’re confused about an easily settled issue, even if that’s not what you’re (consciously) doing.
If you told a group building robot soccer players that beating their opponents is easy and a five-year-old could do it, or if you told them that they’re wasting their time since the robots are using a different soccer-playing process, then that would not be very helpful in actually figuring out how to make/write better soccer-playing robots!
it’s the actual act of smoking or knowing one has smoked [that causes lung cancer]
Both of these are invalidated by the assumption:
She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer.
I’m not sure we disagree; as James said, whether one actually smokes provides information about lung cancer, which is possible regardless of whether smoking is the cause of the cancer. My comment was intended to be more general than causality.
I’m not interested in discussing with this level of condescension.