On the Smoking Lesion, P(Cancer|Smoking) != P(Cancer), by hypothesis
Correct: P(Cancer|Smoking) > P(Cancer). When I said P(O|A) = P(O), I was using A to denote “the decision not to smoke, for the purpose of avoiding cancer.” And this is given by the hypothesis of the Smoking Lesion. The whole premise is that, once we correct for the presence of the genetic lesion (which causes both love of smoking and cancer), smoking is not independently associated with cancer. This also suggests that, once we correct for love of smoking, smoking is not independently associated with cancer. So if you know that your reason for (not) smoking has nothing to do with how much you like smoking, then the knowledge that you (don’t) smoke doesn’t make it seem any more (or less) likely that you’ll get cancer.
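To make this concrete, here’s a brute-force sketch in Python. All the numbers are made up; the only structural assumptions are the ones above (the lesion drives both the taste for smoking and cancer, and smoking depends only on taste):

```python
from itertools import product

# Hypothetical numbers; structure: lesion -> taste -> smoking, lesion -> cancer.
# Smoking itself causes nothing.
P_LESION = 0.1
P_LIKES = {True: 0.9, False: 0.1}    # P(likes smoking | lesion?)
P_SMOKE = {True: 0.8, False: 0.1}    # P(smoke | likes smoking?)
P_CANCER = {True: 0.6, False: 0.05}  # P(cancer | lesion?)

def joint(lesion, likes, smoke, cancer):
    """Probability of one fully specified world under the model above."""
    p = P_LESION if lesion else 1 - P_LESION
    p *= P_LIKES[lesion] if likes else 1 - P_LIKES[lesion]
    p *= P_SMOKE[likes] if smoke else 1 - P_SMOKE[likes]
    p *= P_CANCER[lesion] if cancer else 1 - P_CANCER[lesion]
    return p

def prob(event):
    """P(event), by brute-force enumeration of all 16 worlds."""
    return sum(joint(*w) for w in product([True, False], repeat=4) if event(*w))

def cond(event, given):
    return prob(lambda *w: event(*w) and given(*w)) / prob(given)

cancer = lambda l, t, s, c: c
smoke  = lambda l, t, s, c: s
likes  = lambda l, t, s, c: t

print(prob(cancer))         # P(cancer)         ~ 0.105
print(cond(cancer, smoke))  # P(cancer | smoke) ~ 0.228: smoking is bad news...
# ...but once taste is held fixed, smoking is no extra news at all:
print(cond(cancer, lambda *w: smoke(*w) and likes(*w)))      # ~ 0.325
print(cond(cancer, lambda *w: not smoke(*w) and likes(*w)))  # ~ 0.325, identical
```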
Ah, I see.
Unfortunately, “the decision not to smoke, for the purpose of avoiding cancer” and “the decision not to smoke, for any other reason” are not distinct actions. The actions available are simply “smoke” or “not smoke”. EDT doesn’t take prior information, like motives or genes, into account.
You can observe your preferences and hence take them into account.
Suppose that most people without the lesion find smoking disgusting, while most people with the lesion find it pleasurable. The lesion doesn’t affect your probability of smoking other than by affecting that taste.
EDT says that you should smoke if you find it pleasurable, and that you shouldn’t if you find it disgusting.
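As a sketch of that verdict (utilities hypothetical, cancer probabilities carried over from the model above): once taste is conditioned on, P(cancer | act, taste) is the same for both acts, so the cancer term is a constant and only the enjoyment term can tip the decision.

```python
# Hypothetical utilities: smoking is worth +10 if you find it pleasurable,
# -10 if you find it disgusting; cancer costs 1000 either way.
U_SMOKE = {True: 10, False: -10}
U_CANCER = -1000

def edt_value(smoke, likes, p_cancer_given_taste):
    """Evidential expected utility, conditioning on the agent's known taste.
    Smoking and cancer are independent given taste, so the cancer term is
    identical for both acts and cannot affect which act wins."""
    enjoyment = U_SMOKE[likes] if smoke else 0
    return enjoyment + p_cancer_given_taste * U_CANCER

for likes in (True, False):
    p = 0.325 if likes else 0.057  # P(cancer | taste), from the sketch above
    best = max((True, False), key=lambda s: edt_value(s, likes, p))
    print(f"likes smoking = {likes}: EDT smokes = {best}")
```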
Is that explicitly forbidden by some EDT axiom? It seems quite natural for an EDT agent to know its own motives for its decision.
Figuring out what your options are is a hard problem for any decision theory, because it goes to the heart of what we mean by “could”. In toy problems like this, agents just have their options spoon-fed to them. I was trying to show that EDT makes the sensible decision if it has the right options spoon-fed to it. This opens up at least the possibility that a general EDT agent (one that figures out what its options are for itself) would work, because there’s no reason, in principle, why it can’t consider whether the statement “I decided not to smoke, for the purpose of avoiding cancer” would be good news or bad news.
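Here’s one way that consideration could come out, under one extra (hypothetical) assumption: add a variable D = “resolved to quit in order to avoid cancer” that is independent of the lesion, and let not-smoking follow from either distaste or that resolution. Then not smoking is good news on its own, but “not smoking because I resolved to avoid cancer” is no news at all:

```python
from itertools import product

# Extension of the earlier model (numbers still hypothetical). D is drawn
# independently of the lesion; you smoke iff you like smoking and have not
# resolved to quit.
P_LESION, P_RESOLVE = 0.1, 0.3
P_LIKES = {True: 0.9, False: 0.1}    # P(likes smoking | lesion?)
P_CANCER = {True: 0.6, False: 0.05}  # P(cancer | lesion?)

def joint(lesion, likes, resolve, cancer):
    p = P_LESION if lesion else 1 - P_LESION
    p *= P_LIKES[lesion] if likes else 1 - P_LIKES[lesion]
    p *= P_RESOLVE if resolve else 1 - P_RESOLVE
    p *= P_CANCER[lesion] if cancer else 1 - P_CANCER[lesion]
    return p

def prob(event):
    return sum(joint(*w) for w in product([True, False], repeat=4) if event(*w))

def cond(event, given):
    return prob(lambda *w: event(*w) and given(*w)) / prob(given)

cancer   = lambda l, t, d, c: c
no_smoke = lambda l, t, d, c: not (t and not d)  # smoke iff likes and not resolved

print(prob(cancer))            # P(cancer)          ~ 0.105
print(cond(cancer, no_smoke))  # P(cancer | ~smoke) ~ 0.073: good news on its own
print(cond(cancer, lambda l, t, d, c: no_smoke(l, t, d, c) and d))
# P(cancer | ~smoke, resolved to avoid cancer) ~ 0.105: P(O|A) = P(O), no news
```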
Recognizing this as an option in the first place is a much more complicated issue. But recognizing “smoke” as an option isn’t trivial either. After all, you can’t smoke if there are no cigarettes available. So it seems to me that, if you’re a smoker who just found out about the statistics on smoking and cancer, then the relevant choice you have to make is whether to “decide to quit smoking based on this information about smoking and cancer.”