The Smoking Lesion problem is:

“Susan is debating whether or not to smoke. She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer. Once we fix the presence or absence of this condition, there is no additional correlation between smoking and cancer. Susan prefers smoking without cancer to not smoking without cancer, and prefers smoking with cancer to not smoking with cancer. Should Susan smoke? It seems clear that she should.”
But now assume that Susan suffers from painful anxiety proportional to her Bayesian estimate of the probability of her getting lung cancer. This anxiety plays a bigger role in her utility function than any enjoyment she might get from smoking. Should she still smoke?
Susan will have less anxiety if she doesn’t smoke, so doesn’t this mean she shouldn’t smoke? But when Susan is making the decision about smoking, couldn’t she say to herself “whether I smoke will have no effect on the probability of my getting lung cancer, and since my brain makes a rational estimate of the probability of my getting lung cancer when deciding how much anxiety to dump on me, whether I smoke shouldn’t impact my level of anxiety, so I should smoke since I enjoy it”? Clearly, if Susan flipped a coin to decide if she should smoke, her anxiety would be the same regardless of how the coin landed. Also, is this functionally the same as Newcomb’s problem?
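To make the comparison concrete, here is a minimal sketch in Python of the two standard ways of scoring Susan’s choice. The lesion prior, the lesion–smoking and lesion–cancer probabilities, the enjoyment term, and the anxiety weight are all made-up numbers chosen for illustration, not anything given in the problem.

```python
# Minimal sketch of the anxiety variant of the Smoking Lesion problem.
# All numbers are illustrative assumptions, not part of the original problem.

P_LESION = 0.5                  # prior probability that Susan has the lesion
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_GIVEN_NO_LESION = 0.01
P_SMOKE_GIVEN_LESION = 0.9      # the lesion tends to cause smoking
P_SMOKE_GIVEN_NO_LESION = 0.1

SMOKING_ENJOYMENT = 1.0         # utility Susan gets from smoking
ANXIETY_WEIGHT = 10.0           # anxiety cost per unit of estimated cancer probability


def p_lesion_given_act(smokes: bool) -> float:
    """Posterior on the lesion if Susan treats her own act as evidence (EDT-style)."""
    p_act_lesion = P_SMOKE_GIVEN_LESION if smokes else 1 - P_SMOKE_GIVEN_LESION
    p_act_no_lesion = P_SMOKE_GIVEN_NO_LESION if smokes else 1 - P_SMOKE_GIVEN_NO_LESION
    joint_lesion = p_act_lesion * P_LESION
    joint_no_lesion = p_act_no_lesion * (1 - P_LESION)
    return joint_lesion / (joint_lesion + joint_no_lesion)


def p_cancer(p_lesion: float) -> float:
    """Cancer probability given a credence in the lesion; smoking itself adds nothing."""
    return p_lesion * P_CANCER_GIVEN_LESION + (1 - p_lesion) * P_CANCER_GIVEN_NO_LESION


def utility(smokes: bool, cancer_estimate: float) -> float:
    """Enjoyment from smoking minus anxiety proportional to the cancer estimate."""
    return (SMOKING_ENJOYMENT if smokes else 0.0) - ANXIETY_WEIGHT * cancer_estimate


for smokes in (True, False):
    edt_estimate = p_cancer(p_lesion_given_act(smokes))  # act shifts the estimate
    cdt_estimate = p_cancer(P_LESION)                     # act leaves the estimate alone
    print(f"smoke={smokes}: EDT-style utility={utility(smokes, edt_estimate):.2f}, "
          f"CDT-style utility={utility(smokes, cdt_estimate):.2f}")
```

With these particular numbers, an agent who treats her own act as evidence about the lesion (the EDT-style figure) does not smoke, because the anxiety term dominates, while an agent who holds the lesion probability fixed regardless of her act (the CDT-style figure) smokes; whether Susan’s anxiety should really track the act-conditioned estimate is exactly what is at issue below.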
Generally, this is false.

If I took the time to write a comment laying out a decision-theoretic problem and received a response like this (and saw it so upvoted), I would be pretty annoyed and suspect that maybe (though not definitely) the respondent was fighting the hypothetical, and that their flippant remark might change the tone of the conversation enough to discourage others from engaging with my query.
I’ve been frustrated enough times by people nitpicking or derailing my attempts to introduce a hypothetical (even if only with not-supposed-to-be-derailing throwaway jokes) that by this point I would guess it’s actually rude, in most cases, to respond like this unless you’re really, really sure that your nitpick of a premise significantly affects the hypothetical, or that you’ve got a really good joke. In Should World, people would evaluate the seriousness of a thought experiment on its merits and not by the immediate non-serious responses to it, but experience tells me that’s not a property of the world we actually live in.
If I’m interpreting your comment correctly, you’re either stating that it’s not the case that people’s brains make rational probability estimates (which everybody on friggin’ LessWrong will already know!), or denying a very specific, intentionally artificial statement about the relation between credences and anxiety that was constructed for a decision theory thought experiment. In either case I’m not sure what the benefits of your comment are.
Am I missing something that you and the upvoters saw in your comment?
Edit: Okay, it occurs to me that maybe you were making an extremely tongue-in-cheek, understated rejection of the premise for comical effect—‘Haha, the thought experiments we use are far divorced from the actual vagaries of human thought’. The fact I found it so hard to get this suggests to me that others probably didn’t get the intended interpretation of your comment, which still leaves potential for it to have the negative effects I mentioned above. (E.g. maybe someone got your joke immediately, had a hearty laugh, and upvoted, but then the other upvoters thought they were upvoting the literal interpretation of your post.)
I would guess that the common factor causes her to have an urge to smoke, and if she wants to find out whether she has cancer, it is irrelevant whether she actually smokes; she only has to see whether she has the urge. She has it, dang. Time to smoke.
You offer a reasonable interpretation, but the Smoking Lesion problem only becomes interesting if even after accounting for the urge to smoke, whether you actually smoke provides information on whether you are likely to get lung cancer.
If you know your source code, then whether you actually smoke cannot provide additional Bayesian information on whether you are likely to get lung cancer. Your decision is a logical consequence of your source code, and Bayesianism assumes logical omniscience. Humans in general don’t know their source code, but if Susan is using a formal decision theory to make this decision (which should be assumed since we’re asking what decision theory Susan should use), then she knows her source code for the purpose of making this decision.
If it’s not the urge, what is it? The decision algorithm? If so, I don’t think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn’t apply to the smoker, as they’re using a different decision-making process.
I don’t think you’re taking the thought experiment seriously enough; you’re prematurely considering it (dis)solved by giving a Clever solution. E.g.
If it’s not the urge, what is it?
Obvious alternative that occurred to me in <5 seconds: It’s not the urge, it’s the actual act of smoking or knowing one has smoked. Even if these turn out not to quite work, you don’t show any sign of having even thought of them, which I would not expect if you were seriously engaging with the problem, looking for a reduction that does not leave us feeling confused.
Edit: In fact, James already effectively said ‘the act of smoking’ in the comment to which you were replying!
becomes interesting if even after accounting for the urge to smoke, whether you actually smoke provides information on whether you are likely to get lung cancer.
It’s not enough to say “the act of smoking”. What’s the causal pathway that leads from the lesion to the act of smoking?
Anyway, the smoking lesion problem isn’t confusing. It has a clear answer (smoking doesn’t cause cancer), and it’s only interesting because it can trip up attempts at mathematising decision theory.
It’s not enough to say “the act of smoking”. What’s the causal pathway that leads from the lesion to the act of smoking?
Exactly, that’s part of the problem. You have a bunch of frequencies based on various reference classes, without further information, and you have to figure out how the agent should act on that very limited information, which does not include explicit, detailed causal models. Not all possible worlds are even purely causal, so your point about causal pathways is at best an incomplete solution. That’s the hard edge of the problem, and even if the correct answer turns out to be ‘it depends’ or ‘the question doesn’t make sense’ or involves a dissolution of reference classes or whatever, one paragraph isn’t going to provide a solution and cut through the confusions behind the question.
It seems like your argument proves too much because it would dismiss taking Newcomb’s problem seriously. ‘It’s not enough to say the act of two-boxing...’ I don’t think your attitude would have been productive for the progression of decision theory if people had applied it to other problems that are more mainstream.
It has a clear answer (smoking doesn’t cause cancer), and it’s only interesting because it can trip up attempts at mathematising decision theory.
That’s exactly the point Wei Dai is making in the post I linked!! Decision theory problems aren’t necessarily hard to find the correct specific answers to if we imagine ourselves in the situation. The point is that they are litmus tests for decision theories, and they make us draw up more robust general decision processes or illuminate our own decisions.
If you had said
If so, I don’t think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn’t apply to the smoker, as they’re using a different decision-making process.
in response to Newcomb’s problem, then most people here would see this as a flinch away from getting your hands dirty engaging with the problem. Maybe you’re right and this is a matter whereof we cannot speak, but simply stating that is not useful to those who do not already believe it, and, given the world we live in, it can come across as a way of bragging about your non-confusion or lowering their ‘status’ by making it look like they’re confused about an easily settled issue, even if that’s not what you’re (consciously) doing.
If you told a group building robot soccer players that beating their opponents is easy and a five-year-old could do it, or if you told them that they’re wasting their time since the robots are using a different soccer-playing process, then that would not be very helpful in actually figuring out how to make/write better soccer-playing robots!
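I’m not interested in discussing with this level of condescension.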
it’s the actual act of smoking or knowing one has smoked [that causes lung cancer]
Both of these are invalidated by the assumption:
She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer.
I’m not sure we disagree; as James said, whether one actually smokes provides information about lung cancer, which is possible regardless of whether smoking is the cause of the cancer. My comment was intended to be more general than causality.
But that would depend on other factors, not just the probability of lung cancer. It depends on her motivation to smoke (relaxation, social partnerships, reducing her anxiety, stress management). In that case the immediate benefit gained from smoking may outweigh the risk of lung cancer, which will most likely take 20–30 years to take hold, compared with using other means to mitigate her very present problems. If she is deciding whether or not to smoke based solely on the probability of getting lung cancer and her anxiety about it, the rational brain usually chooses to reduce risk rather than increase it, and if she is looking to reduce her anxiety she should choose not to smoke, because the chance of lung cancer derived from smoking goes down. However, she could also use nicotine as a way to cope with her stress or anxiety, which may provide much better present relief and mitigate any anxiety she has about developing lung cancer in the future.