Whether you get cancer largely depends on whether you have the lesion. But the probability of getting cancer depends not on that fact itself, but on the probability of having the lesion.
Let me quote your own post where you set up the problem:
90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer.
This is the probability of getting cancer which depends on the “thing”, that is, the lesion. It does NOT depend on the probability of having a lesion.
90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer
Those, you are saying, are frequencies and not probabilities. OK, let’s continue:
Let’s suppose that 50% of the people have the lesion and 50% do not, just to make the situation specific.
The probability of having the lesion given a random person … will be 50%, and the probability of not having the lesion will be 50%.
So why is having a lesion (as a function of being a human in this particular population) a probability, while having cancer (as a function of having a lesion) is a frequency?
50% of the people have the lesion. That is a frequency. But if you pick a random person, that person either has the lesion or not. The probability (not the frequency, which is not meaningful for such an individual) that the random person has the lesion is 50%, because that is our expectation that the person has the lesion.
The parallel still holds. If you pick a random person with the lesion, he will either develop cancer or not. The probability that the random person with the lesion develops cancer is 90%. Is that not so?
“Pick a random person with the lesion” has more than one meaning.
If you pick a random person out of the whole population, then the probability that he will develop cancer is 45.5%. This is true even if he has the lesion, if you do not know that he has the lesion, since the probability is your estimate.
If you pick a random person out of the population of people who have the lesion (and therefore you already know who has the lesion), then the probability that he will develop cancer is 90%.
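The two readings can be checked with a couple of lines of Python (a sketch; every figure comes straight from the setup above):

```python
# Setup from the problem: 50% of people have the lesion;
# 90% of lesion-havers and 1% of everyone else get cancer.
p_lesion = 0.50
p_cancer_given_lesion = 0.90
p_cancer_given_no_lesion = 0.01

# Reading 1: a random person from the whole population, lesion
# status unknown -- marginalize over the two possibilities.
p_cancer_random = (p_lesion * p_cancer_given_lesion
                   + (1 - p_lesion) * p_cancer_given_no_lesion)  # ≈ 0.455

# Reading 2: a random person drawn only from the lesion-havers --
# the lesion is known, so the estimate is simply 90%.
p_cancer_known_lesion = p_cancer_given_lesion  # 0.9
```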
Basically you are simply pointing out that if you know if you have the lesion, you will be better off smoking. That is true. In the same way, if you know whether Omega put the million in the box or not, you will be better off taking both boxes. Of course since you are maintaining a consistent position, unlike the others here, that isn’t going to bother you.
But if you do not know if you have the lesion, and if you do not know if the million is in the box, an unbiased estimate of your expected utility must say that you will get more utility by not smoking, and by taking one box.
Yes, I two-box (LW tends to treat it as a major moral failing X-D)
But if you do not know if you have the lesion, and if you do not know if the million is in the box, an unbiased estimate of your expected utility must say that you will get more utility by not smoking, and by taking one box.
And that’s precisely what I disagree with.
The difference is between doing an intervention, that is, changing something in the outside world, and adjusting your estimate which changes nothing in the outside world. “Not smoking” will lead you to adjust your estimate, but it’s not an intervention.
If that’s precisely what you disagree with, can you provide an example where you give numerical estimates of your expected utility for the two choices? Since the condition is that you do not know which is the case, you cannot say “utility X if the lesion or no-million, utility Y if not.” You have to say “estimated utility for one choice: X. Estimated utility for other choice: Y.”
Given the terms of the problem, it is mathematically impossible to provide estimates where two-boxing or smoking will be higher, without those estimates being provably biased.
Regarding the supposed intervention, choosing not to smoke is an intervention, and that is what changes your estimate, and therefore your expected utility.
can you provide an example where you give numerical estimates of your expected utility for the two choices?
I don’t think that utility functions are a useful approach to human decision-making. However in this context if you specify that smoking is pleasurable (and so provides +X utility), I would expect my utility in the I-choose-to-smoke case to be X higher than in the I-choose-not-to-smoke case.
Note, though, that I would have different utilities for the I-want-to-smoke and I-do-not-want-to-smoke cases.
choosing not to smoke is an intervention
No, it is not, since smoking here is not a cause which affects your chances of cancer.
Utility functions are idealizations. So if someone suggests that I use a specific utility function, I will say, “No, thank you, I intend to remain real, not become an idealization.” But real objects are also not circular or square in a mathematical sense, and that does not prevent circles and squares from being useful in dealing with the real world. In the same way it can be useful to use utility functions, and especially when you are talking about situations which are idealized anyway, like the Smoking Lesion and Newcomb.
Your specific proposal will not work, if it is meant to give specific numbers (and maybe you didn’t intend it to anyway). For example, we know there is about an 85% chance you will get cancer if you smoke, and about a 5% chance that you will get cancer if you don’t, given the terms of the problem. So if not getting cancer has significantly more value than smoking, then it is impossible for your answer to work out numerically, without contradicting those proportions.
And that is what you are trying to do: basically you are assuming that your choice is not even correlated with getting cancer, not only that it is not the cause. But the terms of the problem stipulate that your choice is correlated.
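For the record, the “about 85%” and “about 5%” figures can be reproduced as follows. I am assuming here, as stated further down in the thread, a 95% correlation between the lesion and the choice (95% of smokers have the lesion, 5% of non-smokers do):

```python
p_cancer_given_lesion = 0.90
p_cancer_given_no_lesion = 0.01

# Assumed correlation between lesion and choice (95%, per the thread):
p_lesion_given_smoke = 0.95
p_lesion_given_no_smoke = 0.05

# Mix the two cancer rates, weighted by how likely the lesion is
# given each choice.
p_cancer_given_smoke = (
    p_lesion_given_smoke * p_cancer_given_lesion
    + (1 - p_lesion_given_smoke) * p_cancer_given_no_lesion)     # ≈ 0.8555
p_cancer_given_no_smoke = (
    p_lesion_given_no_smoke * p_cancer_given_lesion
    + (1 - p_lesion_given_no_smoke) * p_cancer_given_no_lesion)  # ≈ 0.0545
```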
“which affects your chances of cancer”
It most certainly does affect the chance that matters, which is your subjective estimate. I pointed out before that people would act in the same way even if they knew that determinism was true. If it was, the chance of everything, in your sense, would either be 100% or 0%, and nothing you ever did would be an intervention, in your sense. But you would do the same things anyway, which shows that what you care about and act on is your subjective estimate.
it is impossible for your answer to work out numerically
The answer you got is the answer. It is basically an assertion that one real number is bigger than another real number. What do you mean by “work out numerically”?
basically you are assuming that your choice is not even correlated with getting cancer
Incorrect. My choice is correlated, it’s just not causal.
It most certainly does affect the chance that matters, which is your subjective estimate.
So, here is where we disagree. I do not think my subjective estimate is “the chance that matters”. For example, what happens if my subjective estimate is mistaken?
people would act in the same way even if they knew that determinism was true
If determinism is true, this sentence makes no sense: there is no choice and no option for people to act in any other way.
I will illustrate how your proposal will not work out mathematically. Let’s suppose your default utility is 150, the utility of smoking is 10, and the utility of cancer is negative 100, so that total utility will be as follows:
no smoking and no cancer: 150.
smoking and no cancer: 160.
no smoking and cancer: 50.
smoking and cancer: 60.
You say that you expect to get 10 more utility by smoking than by not smoking. It is easy to see from the above schema why someone would think that, but it is also mistaken. As I said, if you are using a utility function, you do not say, “X utility in this case, Y utility in that case,” but you just calculate an average utility that you expect overall if you make a certain choice. Of course you are free to reject the whole idea of using a utility function at all, as you already suggested, but if you accept the utility function framework for the sake of argument, your proposal will not work, as I am about to explain.
This is how we would calculate your expected utility:
Expected utility of smoking = 150 + 10 - (100 * probability of cancer).
Expected utility of not smoking = 150 - (100 * probability of cancer).
You would like to say that the probability of cancer is either 90% or 1%, depending on the lesion. But that gives you two different values each for smoking and for not smoking, and this does not fit into the expected utility framework. So we have to collapse this to a single probability in each formula (even if the probability in the smoking case might not be the same as in the non-smoking case). What is that probability?
We might say that the probability is 45.5% in both cases, since we know that over the whole population, this number of people will get cancer. In that case, we would get:
Expected utility of smoking = 114.5.
Expected utility of not smoking = 104.5.
This is what you said would happen. However, it is easy to prove that these cannot be unbiased estimates of your utility. We did not stipulate anything about you which is different from the general population, so if these are unbiased estimates, they should come out equal to the average utility of the people who smoke and of the people who do not smoke. But the actual averages are:
Average utility of smokers: 150 + 10 - (100 * .8555) = 74.45.
Average utility of non-smokers: 150 - (100 * .0545) = 144.55.
So why are your values different from these? The reason is that the above calculation takes the probability of 45.5% and leaves it as is, regardless of smoking, which effectively makes your choice an independent variable. In other words, as I said, you are implicitly assuming that your choice is not correlated with the lesion or with cancer, but is an entirely independent variable. This is contrary to the terms of the problem.
Since your choice is correlated with the lesion and therefore also with cancer, the correct way to calculate your expected utility for the two cases is to take the probability of cancer given that particular choice, which leads to the expected utility of 144.55 if you do not smoke, and 74.45 if you do.
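The whole calculation above, both the biased version and the corrected one, fits in a few lines (a sketch using the utilities and probabilities already given):

```python
# Utilities from the worked example: baseline 150, smoking +10, cancer -100.
u_base, u_smoke, u_cancer = 150, 10, -100

p_cancer_population = 0.455   # marginal probability, ignoring the choice
p_cancer_if_smoke = 0.8555    # conditional on choosing to smoke
p_cancer_if_not = 0.0545      # conditional on choosing not to smoke

# Biased version: treats the choice as independent of the lesion.
eu_smoke_biased = u_base + u_smoke + u_cancer * p_cancer_population  # 114.5
eu_no_smoke_biased = u_base + u_cancer * p_cancer_population         # 104.5

# Corrected version: conditions on the choice, matching the actual
# average utilities of smokers and non-smokers.
eu_smoke = u_base + u_smoke + u_cancer * p_cancer_if_smoke  # ≈ 74.45
eu_no_smoke = u_base + u_cancer * p_cancer_if_not           # ≈ 144.55
```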
For example, what happens if my subjective estimate is mistaken?
You will likely get bad results. You can’t fix that by acting on something different from your subjective estimate, because if you think something else is truer than your subjective estimate, then make that your subjective estimate instead. Your subjective estimate matters not because it is automatically right, but because you don’t and can’t have anything which is more right.
If determinism is true, this sentence makes no sense: there is no choice and no option for people to act in any other way.
Consider this situation. Someone is going to work every day to earn money to support himself. Then, one day someone convinces him that determinism is true.
Now maybe determinism is true, and maybe it isn’t. The point is that he is now convinced that it is. What do you expect to happen:
A) The person says, “Either I have 100% chance of starving to death, or a 0% chance. So why should I bother to go to work? It will not affect my chances. Even if I starve to death precisely because of not going to work, it will just mean there was a 100% chance of me not going to work in the first place. I still don’t have any intervention that can change my chances of starving.”
B) The person says, “I might starve if I quit work, but I will probably survive if I keep going to work. So I will keep going to work.”
Determinism as such is not inconsistent with either of these. It is true that if determinism is actually true, then whatever he does, he had a 100% chance of doing that. But there is nothing in the abstract picture to tell you which he is going to do. And in any case, I don’t need to assume that determinism is true. The question is what the person will do, who thinks it is true.
Most people, quite rightly, would expect the second thing to happen, and not the first. That shows that we think that other people are going to act on their subjective estimates, not on the possibility of an “intervention” that changes an objective chance. And if we would do the second thing ourselves, that implies that we are acting on subjective estimates and not on objective chances.
as I said, you are implicitly assuming that your choice is not correlated with the lesion or with cancer
This is incorrect, as I pointed out a comment or two upthread.
The problem is that you still refuse to recognize the distinction between an intervention which changes the outside world and an estimate update which changes nothing in the outside world.
the correct way to calculate your expected utility for the two cases is to take the probability of cancer given that particular choice, which leads to the expected utility of 144.55 if you do not smoke, and 74.45 if you do.
And will you also assert that you can change your expected utility by not smoking?
For example, what happens if my subjective estimate is mistaken?
You will likely get bad results.
Unroll this, please. What does “bad results” mean? Am I more likely to get cancer if my estimate is wrong?
That shows that we think that other people are going to act on their subjective estimates, not on the possibility of an “intervention” that changes an objective chance.
Huh? I don’t understand either why your example shows this or why you think these two things are mutually exclusive opposites.
This is incorrect, as I pointed out a comment or two upthread.
I am explaining why it is correct. Basically you are saying that you cannot change the chance that you will get cancer. But your choice and cancer are correlated variables, so changing your choice changes the expected value of the cancer variable.
You seem to be thinking that it works like this: there are two rows of coins set up so that each coin in one row shows the same side as the corresponding coin in the other row: when one is heads, the other is heads, and when one is tails, the other is tails. Now if you go in and flip over one of the coins, the other will not flip. So the coins are correlated, but flipping one over will not change what the other coin is.
The problem with the coin case is that there is a pre-existing correlation and when you flip a coin, of course it will not flip the other. This means that flipping a coin takes away the correlation. But the correlation between your choice and cancer is a correlation with -your choice-, not with something that comes before your choice. So making a choice determines the expected value of the cancer variable, even if it cannot physically change whether you get cancer. If it did not, your choice would be taking away the correlation, just like you take away the correlation in the coin case. That is why I said you are implicitly assuming that your choice is not correlated with cancer: you are admitting that other people’s choices are correlated, and so are like the rows of coins sitting there, but you think your choice is something that comes later and will take away the correlation in your own case.
The problem is that you still refuse to recognize the distinction between an intervention which changes the outside world and an estimate update which changes nothing in the outside world.
I did not refuse to recognize such a distinction, although it is true that your estimate is part of the world, so updating your estimate is also changing the world. But the main point is that the estimate is what matters, not whether or not your action changes the world.
And will you also assert that you can change your expected utility by not smoking?
Yes. Before you decide whether to smoke or not, your expected utility is 109.5, because this is the average utility over the whole population. If you decide to smoke, your expected utility will become 74.45, and if you decide not to smoke, it will become 144.55. The reason this can happen is because “expected utility” is an expectation, which means that it is something subjective, which can be changed by the change in your estimate of other probabilities.
But note that it is a real expectation, not a fake one: if your expected utility is 144, you expect to get more utility than if your expected utility is 74. It would be an obvious contradiction to say that your expected utility is higher, but you don’t actually expect to get more.
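The numbers here check out: with a 50% lesion rate and the 95% correlation, exactly half the population smokes, so the pre-decision expectation is the average of the two post-decision ones (a sketch):

```python
# P(smoke) = P(lesion) * 0.95 + P(no lesion) * 0.05 = 0.5 under the setup.
p_smoke = 0.5 * 0.95 + 0.5 * 0.05          # 0.5

eu_if_smoke = 74.45      # expected utility after deciding to smoke
eu_if_no_smoke = 144.55  # expected utility after deciding not to

# Before the decision, average over what you might decide.
eu_prior = p_smoke * eu_if_smoke + (1 - p_smoke) * eu_if_no_smoke  # 109.5
```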
What does “bad results” mean? Am I more likely to get cancer if my estimate is wrong?
That depends in what direction your estimate is wrong. You personally would be more likely to get cancer in that situation, since you would mistakenly assume that smoking will not make it more likely that you would get cancer, and therefore you would not avoid smoking.
I don’t understand either why your example shows this or why you think these two things are mutually exclusive opposites.
The person who decides to stop going to work does that because he cannot change the objective chance that he is going to starve to death. The person who decides to keep going to work has a subjective estimate that he is more likely to survive if he keeps going to work.
This is exactly parallel to the situation we are discussing. Consider a Deterministic Smoking Lesion: 100% of people with the lesion get cancer, no one else gets cancer, and 100% of the people with the lesion choose to smoke, and no one else chooses to smoke. By your way of arguing, it is still true that you cannot change whether you have the lesion or not, so you might as well smoke. That is exactly the same as the person who says that he might as well stop going to work. On the other hand, the person who decides to keep going to work is exactly the same as someone who says, “I cannot physically determine whether I have the lesion or not. However, if I choose not to smoke, I will be able to estimate that I do not have the lesion and will not get cancer. After choosing not to smoke, my subjective estimate of the probability of getting cancer will drop to 0%. So I will not smoke.”
Since we don’t seem to be getting anywhere on this level, let’s try digging deeper (please ignore the balrog superstitions).
Here we are talking about a “choice”. That word/concept is very important in this setup. Let’s dissect it.
I will assert that a great deal of confusion around the Smoking Lesion problem (and others related to it) arises out of the dual meaning attached to the concept of “choice”. There are actually two distinct things happening here.
Thing one is acquiring information. When you decide to smoke, this provides you with new, relevant information and so you update your probabilities and expected utilities accordingly. Note that for this you don’t have to do anything; you just learn, it’s passive acquisition of knowledge. Thing one is what you are focused on.
Thing two is acting, doing something in the physical world. When you decide to smoke, you grab a cigarette (or a pipe, or a cigar, or a blunt, or...) and take a drag. This is an action with potential consequences in reality. In the Smoking Lesion world your action does nothing (except give you a bit of utility) -- it’s not causal and does not change your cancer probabilities.
It is not hard to disassemble a single “choice” into its two components. Let’s stop at the moment of time when you have already decided what to do but haven’t done anything yet. At this moment you have already acquired the information—you know what you want / what you have decided—but no action has happened. If you don’t want to freeze time, imagine the Smoking Lesion problem set on an island where there is absolutely nothing to smoke.
Here the “acquire information” component happened, but the “action” component did not. And does it make the problem easier? Sure, it makes it trivial: you just update on the new information, but there was no action and so we don’t have to concern ourselves with its effect (or lack of it), with causality, with free will, etc.
So I would suggest that the issues with Smoking Lesion are the result of conflating two different things in the single concept of “choice”. Disentangle them and the confusion should—hopefully? -- dissipate or at least lessen.
We can break it down, but I suggest a different scheme. There are three parts, not two. So:
At 1:00 PM, I have the desire to smoke.
At 2:00 PM, I decide to smoke.
At 3:00 PM, I actually smoke.
Number 3 is the action. The choice is number 2, and I will discuss that in a moment. But first, note that #1 and #2 are not the same. This is clear for two reasons. First, smoking is worth 10 utility for everyone. So everyone has the same desire, but some people decide to smoke, and some people decide not to. Even in real life, not everyone who has the desire decides to do it. Some people want it, but decide not to.
Second, when I said that the lesion is correlated with the choice, I meant it is correlated with number 2, not number 1. If it was correlated with number 1, you could say, “I have the desire to smoke. So I likely have the lesion. But I can go ahead and smoke; it won’t make cancer any more likely.” And that argument, in that situation, would be correct. That would be exactly the same as if you knew in advance whether or not you had the lesion. If you already know that, smoking will give you more utility. In the same way, in Newcomb, if you know whether or not the million is in the box before you choose, you should take both boxes.
The argument does not work when the correlation is with number 2, however, and we will see why in a moment.
Number 2 does not include the action (which is number 3), but it includes something besides information. It includes the plan of doing number 3, which plan is the direct cause of number 3. It also includes information, as you say, but you cannot have that information without also planning to do 3. Here is why. When you have the desire, you also have the information: “I have the desire.” And in the same way, when you start planning to smoke, you acquire the information, “I am now planning to smoke.” But you do NOT have that information before you start planning to smoke, since it is not even true until then.
When you are deciding whether to smoke or not, you do not yet have the information about whether you are planning to smoke or not, because you have no such plan yet. And you cannot get that information, without forming the plan at the same time.
The lesion is correlated with the plan. So when 2 happens, you form a plan. And you acquire some information, either “I am now planning to smoke,” or “I am now planning not to smoke.”
And that gives you additional information: either “very probably, I had the lesion an hour ago,” or “very probably, I did not have the lesion an hour ago.”
You suppose that this cannot happen, since either you have the lesion or not, from the beginning. But notice that “at 2:00 PM I start planning to smoke” and “at 2 PM I start planning not to smoke,” cannot co-exist in the same world. And since they only exist in different worlds, there should be nothing surprising about the fact that the past of those worlds is probably different.
I don’t see the point of your number 1. If, as you say, everyone has the desire, then it contains no information and is quite irrelevant. I also don’t understand what drives the decision to smoke (or not) if everyone wants the same thing.
And you cannot get that information, without forming the plan at the same time.
I am (and, I assume, most people are) perfectly capable of forming multiple plans and comparing them. Is there really the need for this hair-splitting here?
I could have left it out, but I included it in order to distinguish it from number 2, and because I suspected that you were thinking that the lesion was correlated with the desire. In that situation, you are right that smoking is preferable.
I also don’t understand what drives the decision to smoke (or not)
Consider what drives this kind of decision in reality. Some people desire alcohol and drink; some people desire it but do not drink. Normally this is because the ones who drink think it will be good overall, while the ones who don’t think it will be bad overall.
In this case, we have something similar: people who think “smoking cannot change whether I have the lesion or not, so I might as well smoke” will probably plan to smoke, while people who think “smoking will increase my subjective estimate that I have the lesion,” will probably plan not to smoke.
Looking at this in more detail, consider again the Deterministic Smoking Lesion, where 100% of the people with the lesion choose to smoke, and no one else does. What is driving the decision in this case is obviously the lesion. But you can still ask, “What is going on in their minds when they make the decision?” And in that case it is likely that the lesion makes people think that smoking makes no difference, while not having the lesion lets them notice that smoking is a very bad idea.
In the case we were considering, there was a 95% correlation, not a 100% correlation. But a high correlation is on a continuum with the perfect correlation; just as the lesion is completely driving the decision in the 100% correlation case, it is mostly driving the decision in the 95% case. So basically the lesion tends to make people think like Lumifer, while not having the lesion tends to make people think like entirelyuseless.
I am (and, I assume, most people are) perfectly capable of forming multiple plans and comparing them.
If you do that, obviously you are not planning to carry out all of those plans, since they are different. You are considering them, not yet planning to do them. Number 2 is once you are sure about which one you plan to do.
You are basically saying that there is no way to know what you are going to do before you actually do it. I don’t find this to be a reasonable position.
Situations when this happens exist—typically they are associated with internal conflict and emotional stress—but they are definitely edge cases. In normal life your deliberate actions are planned (if only a few seconds beforehand) and you can reliably say what you are going to do just before you actually do it.
Humans possess reflection, the ability to introspect, and knowing what you are going to do almost always precedes actually doing it. I am not sure why you want to keep on conflating knowing and doing.
You are basically saying that there is no way to know what you are going to do before you actually do it.
I am not saying that. Number 2 is different from number 3 -- you can decide whether to smoke, before actually smoking.
What you cannot do is know what you are going to decide before you decide it. This is evident from the meaning of deciding to do something, but we can look at a couple of examples:
Suppose a chess computer has three options at a particular point. It does not yet know which one it is going to do, and it has not yet decided. Your argument is that it should be able to first find out what it is going to decide, and then decide it. This is a contradiction; suppose it finds out that it is going to do the first. Then it is silly to say it has not yet decided; it has already decided to do the first.
Suppose your friend says, “I have two options for vacation, China and Mexico. I haven’t decided where to go yet, but I already know that I am going to go to China and not to Mexico.” That is silly; if he already knows that he is going to go to China, he has already decided.
In any case, if you could know before deciding (which is absurd), we could just modify the original situation so that the lesion is correlated with knowing that you are going to smoke. Then since I already know I would not smoke, I know I would not have the lesion, while since you presumably know you would smoke, you know you would have the lesion.
So the distinction between acquiring information and action stands?
Yes, but not in the sense that you wanted it to. That is, you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
As I have said before, if you have information in advance about whether you have the lesion, or whether the million is in the box, then it is better to smoke or take both boxes. But if you do not, it is better not to smoke and to take only one box.
you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
I don’t agree with that—what, until the moment I make the decision I have no clue, zero information, about what I will decide?—but that may not be relevant at the moment.
If I decide to smoke but take no action, is there any problem?
I agree that you can have some probable information about what you will decide before you are finished deciding, but as you noted, that is not relevant anyway.
If I decide to smoke but take no action, is there any problem?
It isn’t clear what you mean by “is there any problem?” If you mean, is there a problem with this description of the situation, then yes, there is some cause missing. In other words, once you decide to smoke, you will smoke unless something comes up to prevent it: e.g. the cigarettes are missing, or you change your mind, or at least forget about it, or whatever.
If you meant, “am I likely to get cancer,” the answer is yes. Because the lesion is correlated with deciding to smoke, and it causes cancer. So even if something comes up to prevent smoking, you still likely have the lesion, and therefore likely get cancer.
Newcomb is similar: if you decide to take only one box, but then absentmindedly grab them both, the million will be likely to be there. While if you decide to take both, but the second one slips out of your hands, the million will be likely not to be there.
It isn’t clear what you mean by “is there any problem?”
Much of the confusion around the Smoking Lesion centers on whether your choice makes any difference to the outcome. If we disassemble the choice into two components of “learning” and “doing”, it becomes clear (to me, at least) that the “learning” part will cause you to update your estimates and the “doing” part will, er, do nothing. In this framework there is no ambiguity about causality, free will, etc.
Let me quote your own post where you set up the problem:
This is the probability of getting cancer which depends on the “thing”, that is, the lesion. It does NOT depend on the probability of having a lesion.
“90% of the people” etc is a statement about frequencies, not probabilities.
Let’s look at the context.
You said
That, you are saying, are frequencies and not probabilities. OK, let’s continue:
So why having a lesion (as a function of being a human in this particular population) is a probability and having cancer (as a function of having a lesion) is a frequency?
50% of the people have the lesion. That is a frequency. But if you pick a random person, that person either has the lesion or not. The probability, and not the frequency (which is not meaningful in the case of such an individual), that the random person has the lesion is 50%, because that is our expectation that the person has the lesion.
The parallel still holds. If you pick a random person with the lesion, he will either develop cancer or not. The probability that the random person with the lesion develops cancer is 90%. Is that not so?
“Pick a random person with the lesion” has more than one meaning.
If you pick a random person out of the whole population, then the probability that he will develop cancer is 45.5%. This is true even if he has the lesion, if you do not know that he has the lesion, since the probability is your estimate.
If you pick a random person out of the population of people who have the lesion (and therefore you already know who has the lesion), then the probability that he will develop cancer is 90%.
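The two readings of “pick a random person” can be checked numerically. A minimal Python sketch, using the figures from the problem setup (50% lesion rate, 90% / 1% cancer rates):

```python
# Figures from the problem setup.
p_lesion = 0.50         # fraction of the population with the lesion
p_cancer_lesion = 0.90  # cancer rate among people with the lesion
p_cancer_clean = 0.01   # cancer rate among people without it

# Reading 1: a random person from the whole population, lesion status
# unknown -- marginalize over the lesion.
p_cancer_marginal = (p_lesion * p_cancer_lesion
                     + (1 - p_lesion) * p_cancer_clean)
print(p_cancer_marginal)  # 0.455

# Reading 2: a random person drawn from the lesion group, so the
# lesion is already known -- the conditional rate applies directly.
print(p_cancer_lesion)    # 0.9
```

The 45.5% and 90% answers are both correct; they just answer different questions, which is the point being made above.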
Basically you are simply pointing out that if you know whether you have the lesion, you will be better off smoking. That is true. In the same way, if you know whether Omega put the million in the box or not, you will be better off taking both boxes. Of course since you are maintaining a consistent position, unlike the others here, that isn’t going to bother you.
But if you do not know if you have the lesion, and if you do not know if the million is in the box, an unbiased estimate of your expected utility must say that you will get more utility by not smoking, and by taking one box.
Yes, I two-box (LW tends to treat it as a major moral failing X-D)
And that’s precisely what I disagree with.
The difference is between doing an intervention, that is, changing something in the outside world, and adjusting your estimate which changes nothing in the outside world. “Not smoking” will lead you to adjust your estimate, but it’s not an intervention.
If that’s precisely what you disagree with, can you provide an example where you give numerical estimates of your expected utility for the two choices? Since the condition is that you do not know which is the case, you cannot say “utility X if the lesion or no-million, utility Y if not.” You have to say “estimated utility for one choice: X. Estimated utility for other choice: Y.”
Given the terms of the problem, it is mathematically impossible to provide estimates where two-boxing or smoking comes out higher, without those estimates being provably biased.
Regarding the supposed intervention, choosing not to smoke is an intervention, and that is what changes your estimate, and therefore your expected utility.
I don’t think that utility functions are a useful approach to human decision-making. However in this context if you specify that smoking is pleasurable (and so provides +X utility), I would expect my utility in the I-choose-to-smoke case to be X higher than in the I-choose-not-to-smoke case.
Note, though, that I would have different utilities for the I-want-to-smoke and I-do-not-want-to-smoke cases.
No, it is not since smoking here is not a cause which affects your chances of cancer.
Utility functions are idealizations. So if someone suggests that I use a specific utility function, I will say, “No, thank you, I intend to remain real, not become an idealization.” But real objects are also not circular or square in a mathematical sense, and that does not prevent circles and squares from being useful in dealing with the real world. In the same way it can be useful to use utility functions, and especially when you are talking about situations which are idealized anyway, like the Smoking Lesion and Newcomb.
Your specific proposal will not work, if it is meant to give specific numbers (and maybe you didn’t intend it to anyway). For example, we know there is about an 85% chance you will get cancer if you smoke, and about a 5% chance that you will get cancer if you don’t, given the terms of the problem. So if not getting cancer has significantly more value than smoking, then it is impossible for your answer to work out numerically, without contradicting those proportions.
And that is what you are trying to do: basically you are assuming that your choice is not even correlated with getting cancer, not only that it is not the cause. But the terms of the problem stipulate that your choice is correlated.
“which affects your chances of cancer”
It most certainly does affect the chance that matters, which is your subjective estimate. I pointed out before that people would act in the same way even if they knew that determinism was true. If it was, the chance of everything, in your sense, would either be 100% or 0%, and nothing you ever did would be an intervention, in your sense. But you would do the same things anyway, which shows that what you care about and act on is your subjective estimate.
The answer you got is the answer. It is basically an assertion that one real number is bigger than another real number. What do you mean by “work out numerically”?
Incorrect. My choice is correlated, it’s just not causal.
So, here is where we disagree. I do not think my subjective estimate is “the chance that matters”. For example, what happens if my subjective estimate is mistaken?
If determinism is true, this sentence makes no sense: there is no choice and no option for people to act in any other way.
I will illustrate how your proposal will not work out mathematically. Let’s suppose your default utility is 150, the utility of smoking is 10, and the utility of cancer is negative 100, so that total utility will be as follows:
no smoking and no cancer: 150.
smoking and no cancer: 160.
no smoking and cancer: 50.
smoking and cancer: 60.
You say that you expect to get 10 more utility by smoking than by not smoking. It is easy to see from the above schema why someone would think that, but it is also mistaken. As I said, if you are using a utility function, you do not say, “X utility in this case, Y utility in that case,” but you just calculate an average utility that you expect overall if you make a certain choice. Of course you are free to reject the whole idea of using a utility function at all, as you already suggested, but if you accept the utility function framework for the sake of argument, your proposal will not work, as I am about to explain.
This is how we would calculate your expected utility:
Expected utility of smoking = 150 + 10 - (100 * probability of cancer).
Expected utility of not smoking = 150 - (100 * probability of cancer).
You would like to say that the probability of cancer is either 90% or 1%, depending on the lesion. But that gives you two different values each for smoking and for not smoking, and this does not fit into the expected utility framework. So we have to collapse this to a single probability in each formula (even if the probability in the smoking case might not be the same as in the non-smoking case). What is that probability?
We might say that the probability is 45.5% in both cases, since we know that over the whole population, this number of people will get cancer. In that case, we would get:
Expected utility of smoking = 114.5.
Expected utility of not smoking = 104.5.
This is what you said would happen. However, it is easy to prove that these cannot be unbiased estimates of your utility. We did not stipulate anything about you which is different from the general population, so if these are unbiased estimates, they should come out equal to the average utility of the people who smoke and of the people who do not smoke. But the actual averages are:
Average utility of smokers: 150 + 10 - (100 * .8555) = 74.45.
Average utility of non-smokers: 150 - (100 * .0545) = 144.55.
So why are your values different from these? The reason is that the above calculation takes the probability of 45.5% and leaves it as is, regardless of smoking, which effectively makes your choice an independent variable. In other words, as I said, you are implicitly assuming that your choice is not correlated with the lesion or with cancer, but is an entirely independent variable. This is contrary to the terms of the problem.
Since your choice is correlated with the lesion and therefore also with cancer, the correct way to calculate your expected utility for the two cases is to take the probability of cancer given that particular choice, which leads to the expected utility of 144.55 if you do not smoke, and 74.45 if you do.
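The two competing calculations can be reproduced directly. A sketch in Python, using the utilities from the schema above (default 150, smoking +10, cancer −100) and the 95% choice/lesion correlation implied by the 85.55% / 5.45% figures:

```python
# Utilities from the schema above.
base, smoke_bonus, cancer_cost = 150.0, 10.0, 100.0

# Cancer probability conditional on the choice, given a 95%
# correlation between the choice and the lesion.
p_cancer_if_smoke = 0.95 * 0.90 + 0.05 * 0.01  # = 0.8555
p_cancer_if_not = 0.05 * 0.90 + 0.95 * 0.01    # = 0.0545

# Biased calculation: plug in the population-wide 45.5% regardless
# of the choice, which treats the choice as an independent variable.
eu_smoke_biased = base + smoke_bonus - cancer_cost * 0.455  # 114.5
eu_not_biased = base - cancer_cost * 0.455                  # 104.5

# Unbiased calculation: condition the cancer probability on the choice.
eu_smoke = base + smoke_bonus - cancer_cost * p_cancer_if_smoke  # 74.45
eu_not = base - cancer_cost * p_cancer_if_not                    # 144.55
```

The conditional figures match the actual average utilities of smokers and non-smokers in the population; the 45.5% figures do not, which is the sense in which they are biased.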
You will likely get bad results. You can’t fix that by acting on something different from your subjective estimate, because if you think something else is truer than your subjective estimate, then make that your subjective estimate instead. Your subjective estimate matters not because it is automatically right, but because you don’t and can’t have anything which is more right.
Consider this situation. Someone is going to work every day to earn money to support himself. Then, one day someone convinces him that determinism is true.
Now maybe determinism is true, and maybe it isn’t. The point is that he is now convinced that it is. What do you expect to happen:
A) The person says, “Either I have 100% chance of starving to death, or a 0% chance. So why should I bother to go to work? It will not affect my chances. Even if I starve to death precisely because of not going to work, it will just mean there was a 100% chance of me not going to work in the first place. I still don’t have any intervention that can change my chances of starving.”
B) The person says, “I might starve if I quit work, but I will probably survive if I keep going to work. So I will keep going to work.”
Determinism as such is not inconsistent with either of these. It is true that if determinism is actually true, then whatever he does, he had a 100% chance of doing that. But there is nothing in the abstract picture to tell you which he is going to do. And in any case, I don’t need to assume that determinism is true. The question is what the person will do, who thinks it is true.
Most people, quite rightly, would expect the second thing to happen, and not the first. That shows that we think that other people are going to act on their subjective estimates, not on the possibility of an “intervention” that changes an objective chance. And if we would do the second thing ourselves, that implies that we are acting on subjective estimates and not on objective chances.
This is incorrect, as I pointed out a comment or two upthread.
The problem is that you still refuse to recognize the distinction between an intervention which changes the outside world and an estimate update which changes nothing in the outside world.
And will you also assert that you can change your expected utility by not smoking?
Unroll this, please. What does “bad results” mean? Am I more likely to get cancer if my estimate is wrong?
Huh? I don’t understand either why your example shows this or why you think these two things are mutually exclusive opposites.
I am explaining why it is correct. Basically you are saying that you cannot change the chance that you will get cancer. But your choice and cancer are correlated variables, so changing your choice changes the expected value of the cancer variable.
You seem to be thinking that it works like this: there are two rows of coins arranged so that each coin in one row shows the same side as the corresponding coin in the other row: when one is heads, the other is heads, and when one is tails, the other is tails. Now if you go in and flip over one of the coins, the other will not flip. So the coins are correlated, but flipping one over will not change what the other coin is.
The problem with the coin case is that there is a pre-existing correlation and when you flip a coin, of course it will not flip the other. This means that flipping a coin takes away the correlation. But the correlation between your choice and cancer is a correlation with -your choice-, not with something that comes before your choice. So making a choice determines the expected value of the cancer variable, even if it cannot physically change whether you get cancer. If it did not, your choice would be taking away the correlation, just like you take away the correlation in the coin case. That is why I said you are implicitly assuming that your choice is not correlated with cancer: you are admitting that other people’s choices are correlated, and so are like the rows of coins sitting there, but you think your choice is something that comes later and will take away the correlation in your own case.
I did not refuse to recognize such a distinction, although it is true that your estimate is part of the world, so updating your estimate is also changing the world. But the main point is that the estimate is what matters, not whether or not your action changes the world.
Yes. Before you decide whether to smoke or not, your expected utility is 109.5, because this is the average utility over the whole population. If you decide to smoke, your expected utility will become 74.45, and if you decide not to smoke, it will become 144.55. The reason this can happen is because “expected utility” is an expectation, which means that it is something subjective, which can be changed by the change in your estimate of other probabilities.
But note that it is a real expectation, not a fake one: if your expected utility is 144, you expect to get more utility than if your expected utility is 74. It would be an obvious contradiction to say that your expected utility is higher, but you don’t actually expect to get more.
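The 109.5 figure is just the population average of the two conditional expectations. A small check, assuming (as the symmetric setup implies, with a 50% lesion rate and 95% correlation in each direction) that half the population chooses to smoke:

```python
# P(smoke) over the whole population: 95% of the lesion half smoke,
# 5% of the non-lesion half smoke.
p_smoke = 0.95 * 0.5 + 0.05 * 0.5  # = 0.5

# Expected utility before the decision: average over the two choices,
# using the conditional expected utilities from the earlier calculation.
eu_prior = p_smoke * 74.45 + (1 - p_smoke) * 144.55
print(eu_prior)  # 109.5
```

Deciding then moves your expectation from this prior average to whichever conditional value corresponds to your choice.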
That depends in what direction your estimate is wrong. You personally would be more likely to get cancer in that situation, since you would mistakenly assume that smoking will not make it more likely that you would get cancer, and therefore you would not avoid smoking.
The person who decides to stop going to work does that because he cannot change the objective chance that he is going to starve to death. The person who decides to keep going to work has a subjective estimate that he is more likely to survive if he keeps going to work.
This is exactly parallel to the situation we are discussing. Consider a Deterministic Smoking Lesion: 100% of people with the lesion get cancer, no one else gets cancer, and 100% of the people with the lesion choose to smoke, and no one else chooses to smoke. By your way of arguing, it is still true that you cannot change whether you have the lesion or not, so you might as well smoke. That is exactly the same as the person who says that he might as well stop going to work. On the other hand, the person who decides to keep going to work is exactly the same as someone who says, “I cannot physically determine whether I have the lesion or not. However, if I choose not to smoke, I will be able to estimate that I do not have the lesion and will not get cancer. After choosing not to smoke, my subjective estimate of the probability of getting cancer will drop to 0%. So I will not smoke.”
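The Deterministic Smoking Lesion is just the 95% case pushed to a 100% correlation. A minimal sketch of the subjective estimate as a function of that correlation (the function and its parameters are illustrative, not from the original post):

```python
def p_cancer_given_no_smoke(corr, p_cancer_lesion=0.90, p_cancer_clean=0.01):
    """Estimated P(cancer | choose not to smoke).

    corr is P(lesion | smoke); by the symmetric setup,
    P(lesion | no smoke) = 1 - corr.
    """
    p_lesion = 1.0 - corr
    return p_lesion * p_cancer_lesion + (1.0 - p_lesion) * p_cancer_clean

# Original problem: 95% correlation, 90% / 1% cancer rates.
print(p_cancer_given_no_smoke(0.95))            # ~0.0545

# Deterministic version: 100% correlation, 100% / 0% cancer rates --
# after choosing not to smoke, the estimate drops to exactly 0.
print(p_cancer_given_no_smoke(1.0, 1.0, 0.0))   # 0.0
```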
Since we don’t seem to be getting anywhere on this level, let’s try digging deeper (please ignore the balrog superstitions).
Here we are talking about a “choice”. That word/concept is very important in this setup. Let’s dissect it.
I will assert that a great deal of confusion around the Smoking Lesion problem (and others related to it) arises out of the dual meaning attached to the concept of “choice”. There are actually two distinct things happening here.
Thing one is acquiring information. When you decide to smoke, this provides you with new, relevant information and so you update your probabilities and expected utilities accordingly. Note that for this you don’t have to do anything; you just learn, it’s passive acquisition of knowledge. Thing one is what you are focused on.
Thing two is acting, doing something in the physical world. When you decide to smoke, you grab a cigarette (or a pipe, or a cigar, or a blunt, or...) and take a drag. This is an action with potential consequences in reality. In the Smoking Lesion world your action does nothing (except give you a bit of utility) -- it’s not causal and does not change your cancer probabilities.
It is not hard to disassemble a single “choice” into its two components. Let’s stop at the moment of time when you already decided what to do but haven’t done anything yet. At this moment you have already acquired the information—you know what you want / what you have decided—but no action happened. If you don’t want to freeze time imagine the Smoking Lesion problem set on an island where there is absolutely nothing to smoke.
Here the “acquire information” component happened, but the “action” component did not. And does it make the problem easier? Sure, it makes it trivial: you just update on the new information, but there was no action and so we don’t have to concern ourselves with its effect (or lack of it), with causality, with free will, etc.
So I would suggest that the issues with Smoking Lesion are the result of conflating two different things in the single concept of “choice”. Disentangle them and the confusion should, hopefully, dissipate or at least lessen.
We can break it down, but I suggest a different scheme. There are three parts, not two. So:
At 1:00 PM, I have the desire to smoke.
At 2:00 PM, I decide to smoke.
At 3:00 PM, I actually smoke.
Number 3 is the action. The choice is number 2, and I will discuss that in a moment. But first, note that #1 and #2 are not the same. This is clear for two reasons. First, smoking is worth 10 utility for everyone. So everyone has the same desire, but some people decide to smoke, and some people decide not to. Even in real life not everyone who has the desire decides to act on it. Some people want it, but decide not to.
Second, when I said that the lesion is correlated with the choice, I meant it is correlated with number 2, not number 1. If it was correlated with number 1, you could say, “I have the desire to smoke. So I likely have the lesion. But I can go ahead and smoke; it won’t make cancer any more likely.” And that argument, in that situation, would be correct. That would be exactly the same as if you knew in advance whether or not you had the lesion. If you already know that, smoking will give you more utility. In the same way, in Newcomb, if you know whether or not the million is in the box before you choose, you should take both boxes.
The argument does not work when the correlation is with number 2, however, and we will see why in a moment.
Number 2 does not include the action (which is number 3), but it includes something besides information. It includes the plan of doing number 3, which plan is the direct cause of number 3. It also includes information, as you say, but you cannot have that information without also planning to do 3. Here is why. When you have the desire, you also have the information: “I have the desire.” And in the same way, when you start planning to smoke, you acquire the information, “I am now planning to smoke.” But you do NOT have that information before you start planning to smoke, since it is not even true until then.
When you are deciding whether to smoke or not, you do not yet have the information about whether you are planning to smoke or not, because you have no such plan yet. And you cannot get that information, without forming the plan at the same time.
The lesion is correlated with the plan. So when 2 happens, you form a plan. And you acquire some information, either “I am now planning to smoke,” or “I am now planning not to smoke.”
And that gives you additional information: either “very probably, I had the lesion an hour ago,” or “very probably, I did not have the lesion an hour ago.”
You suppose that this cannot happen, since either you have the lesion or not, from the beginning. But notice that “at 2:00 PM I start planning to smoke” and “at 2 PM I start planning not to smoke,” cannot co-exist in the same world. And since they only exist in different worlds, there should be nothing surprising about the fact that the past of those worlds is probably different.
I don’t see the point of your number 1. If, as you say, everyone has the desire, then it contains no information and is quite irrelevant. I also don’t understand what drives the decision to smoke (or not) if everyone wants the same thing.
I am (and, I assume, most people are) perfectly capable of forming multiple plans and comparing them. Is there really the need for this hair-splitting here?
I could have left it out, but I included it in order to distinguish it from number 2, and because I suspected that you were thinking that the lesion was correlated with the desire. In that situation, you are right that smoking is preferable.
Consider what drives this kind of decision in reality. Some people desire alcohol and drink; some people desire it but do not drink. Normally this is because the ones who drink think that it will be good overall, while the ones who don’t think it will be bad overall.
In this case, we have something similar: people who think “smoking cannot change whether I have the lesion or not, so I might as well smoke” will probably plan to smoke, while people who think “smoking will increase my subjective estimate that I have the lesion,” will probably plan not to smoke.
Looking at this in more detail, consider again the Deterministic Smoking Lesion, where 100% of the people with the lesion choose to smoke, and no one else does. What is driving the decision in this case is obviously the lesion. But you can still ask, “What is going on in their minds when they make the decision?” And in that case it is likely that the lesion makes people think that smoking makes no difference, while not having the lesion lets them notice that smoking is a very bad idea.
In the case we were considering, there was a 95% correlation, not a 100% correlation. But a high correlation is on a continuum with the perfect correlation; just as the lesion is completely driving the decision in the 100% correlation case, it is mostly driving the decision in the 95% case. So basically the lesion tends to make people think like Lumifer, while not having the lesion tends to make people think like entirelyuseless.
If you do that, obviously you are not planning to carry out all of those plans, since they are different. You are considering them, not yet planning to do them. Number 2 is once you are sure about which one you plan to do.
You are basically saying that there is no way to know what you are going to do before you actually do it. I don’t find this to be a reasonable position.
Situations when this happens exist—typically they are associated with internal conflict and emotional stress—but they are definitely edge cases. In normal life your deliberate actions are planned (if only a few seconds beforehand) and you can reliably say what you are going to do just before you actually do it.
Humans possess reflection, the ability to introspect, and knowing what you are going to do almost always precedes actually doing it. I am not sure why you want to keep on conflating knowing and doing.
I am not saying that. Number 2 is different from number 3 -- you can decide whether to smoke, before actually smoking.
What you cannot do is know what you are going to decide, before you decide it. This is evident from the meaning of deciding to do something, but we can look at a couple of examples:
Suppose a chess computer has three options at a particular point. It does not yet know which one it is going to do, and it has not yet decided. Your argument is that it should be able to first find out what it is going to decide, and then decide it. This is a contradiction; suppose it finds out that it is going to do the first. Then it is silly to say it has not yet decided; it has already decided to do the first.
Suppose your friend says, “I have two options for vacation, China and Mexico. I haven’t decided where to go yet, but I already know that I am going to go to China and not to Mexico.” That is silly; if he already knows that he is going to go to China, he has already decided.
In any case, if you could know before deciding (which is absurd), we could just modify the original situation so that the lesion is correlated with knowing that you are going to smoke. Then since I already know I would not smoke, I know I would not have the lesion, while since you presumably know you would smoke, you know you would have the lesion.
So the distinction between acquiring information and action stands?
That’s fine, I never claimed anything like that.
Yes, but not in the sense that you wanted it to. That is, you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
As I have said before, if you have information in advance about whether you have the lesion, or whether the million is in the box, then it is better to smoke or take both boxes. But if you do not, it is better not to smoke and to take only one box.
I don’t agree with that (what, until the moment I make the decision I have no clue, zero information, about what I will decide?), but that may not be relevant at the moment.
If I decide to smoke but take no action, is there any problem?
I agree that you can have some probable information about what you will decide before you are finished deciding, but as you noted, that is not relevant anyway.
It isn’t clear what you mean by “is there any problem?” If you mean, is there a problem with this description of the situation, then yes, there is some cause missing. In other words, once you decide to smoke, you will smoke unless something comes up to prevent it: e.g. the cigarettes are missing, or you change your mind, or at least forget about it, or whatever.
If you meant, “am I likely to get cancer,” the answer is yes. Because the lesion is correlated with deciding to smoke, and it causes cancer. So even if something comes up to prevent smoking, you still likely have the lesion, and therefore likely get cancer.
Newcomb is similar: if you decide to take only one box, but then absentmindedly grab them both, the million will be likely to be there. While if you decide to take both, but the second one slips out of your hands, the million will be likely not to be there.
Much of the confusion around the Smoking Lesion centers on whether your choice makes any difference to the outcome. If we disassemble the choice into two components of “learning” and “doing”, it becomes clear (to me, at least) that the “learning” part will cause you to update your estimates and the “doing” part will, er, do nothing. In this framework there is no ambiguity about causality, free will, etc.
You seem to be ignoring the deciding again. But in any case, I agree that causality and free will are irrelevant. I have been saying that all along.