Another possible rebuttal, even retaining the analytical meaning of the word "choose," is to complete the demonstration by appealing to the "if I can" in point 5 with:
6a—since you cannot choose free will, then it doesn’t exist.
So, for the argument to be complete, a believer would have to add:
6b—I choose to believe in free will.
But then you have to show that you actually chose conclusion 6b instead of simply having reached it, which I don’t think can be done.
Richard uses “choose” in this way, but I don’t. I don’t think choosing means you necessarily have libertarian free will.
In any case I am not sure this is a great rebuttal, since someone can say, “maybe it’s choosing, maybe it’s not. But at least I am going to try to choose.”
I don’t think choosing means you necessarily have libertarian free will.
Neither do I, but since it’s just words, we can be gracious and grant the usage in the limited context of the philosophical discussion. Although your formulation is way clearer.
“maybe it’s choosing, maybe it’s not. But at least I am going to try to choose.”
Well, anyone can do as they please, but the argument rested on choosing, not on reaching the conclusion. So: if it is choosing, show it; if it's not, the argument is invalid.
It is a practical argument. It is not meant to prove that people have libertarian free will. It is not even meant to show that it is probable. It is meant to show that “trying to convince yourself to believe that you have libertarian free will is a good idea.”
That is why I brought up the badness of being wrong about that, because it is an argument about what is good to do, not an argument about what is true. This response applies to your other comment as well.
It is meant to show that “trying to convince yourself to believe that you have libertarian free will is a good idea.”
I feel that the crux of the argument shows that choosing (in the analytical sense) libertarian free will is a good idea, but only if you can. It lacks the part showing that merely reaching the conclusion of libertarian free will is also a good idea.
It doesn’t need to say that reaching the conclusion is good. It just has to show that trying as hard as you can to reach that conclusion is good. The argument is meant to be that making that attempt has a potential upside (succeeding when you actually have libertarian free will) and no potential downside (because if you succeed and the conclusion is false, it is not your fault.)
That’s wrong, because something not being your fault does not mean it is not a downside, just as getting cancer is a downside of smoking in the Smoking Lesion even though it is not your fault.
and no potential downside (because if you succeed and the conclusion is false, it is not your fault.)
That is the part of the argument that is missing from the original formulation, and assuming it I think does a disservice to your analysis and the original argument too.
I think that the argument is not so much that if you succeed in incorrectly convincing yourself that you have (libertarian) free will, it is not your fault. Instead, I think the argument is that success in willfully convincing yourself that you have free will (or convincing yourself of anything else, for that matter) implies that you have free will. If you didn’t have free will, then you did not really willfully convince yourself of anything—instead, your belief (or lack thereof) in free will is just something that happened.
Sure, but the question is why you should try to convince yourself of libertarian free will, instead of trying to convince yourself of the opposite. If you succeed in the first case, it shows you are right, but if you succeed in the second, it shows you are wrong.
So, I don’t see the flaw in the argument. Clearly the argument doesn’t really demonstrate that we have free will, but I don’t think that it is intended to do that. It does seem to make the case that if you want to be right about free will, you should try to convince yourself that you have free will.
That depends. If you think that you should take both boxes in Newcomb, and that you should smoke in the Smoking Lesion, then you are consistent in also thinking that you should try to convince yourself that you have free will. But if you disagree with some of them but not with others, your position is inconsistent.
I disagree with all three, and the argument is implied in my other post about Newcomb and the lesion. In particular, in the case of convincing yourself, the fact that it would be bad to believe something false, is a reason not to convince yourself (unless the evidence supports it) even if it is merely something that happens, just like cancer is a reason not to smoke even though it would be just something that happens.
I am, per your criteria, consistent. Per Newcomb, I’ve always been a two-boxer. One of my thoughts about Newcomb was nicely expressed in a recent posting by Lumifer.
Per the smoking lesion—as a non-smoker with no desire to smoke and a belief that smoking causes cancer, I’ve never gotten past fighting the hypothetical. However, I just now made the effort and realized that within the hypothetical world of the smoking lesion, I would choose to smoke.
And, I think the argument in favor of trying to convince yourself that you have free will has merit. I do have a slight concern about the word “libertarian” in your formulation of the argument, which is why I omitted it or included it parenthetically. My concern is that under a compatibilist conception of free will, it would be possible to willfully convince yourself of something even if determinism is true. But, if you remove the word “libertarian”, it seems reasonable that a person interested in arriving at truth should attempt to convince himself/herself that he/she has free will.
ETA:
In the parent post, you said:
That depends. If you think that you should take both boxes in Newcomb, and that you should smoke in the Smoking Lesion, then you are consistent in also thinking that you should try to convince yourself that you have free will. But if you disagree with some of them but not with others, your position is inconsistent.
Which is it? Is the argument bad or is it only inconsistent with one-boxing and not smoking?
It seems to me that even if we accept your argument that to be consistent one must either two-box, smoke, and try to convince oneself that one has free will, or one-box, not smoke, and not try to convince oneself that one has free will, you still have not made the case that the arguments in favor of two-boxing, smoking, or trying to convince oneself that one has free will are bad arguments.
Intellectually I have more respect for someone who holds a consistent position on these things than someone who holds an inconsistent position. The original point was a bit ad hominem, as most people on LW were maintaining Eliezer’s inconsistent position (one-boxing and smoking).
However, if we speak of good and bad in terms of good and bad results, all three positions (two-boxing, smoking, and convincing yourself of something apart from evidence) are bad in that they have bad results (no million, cancer, and potentially believing something false.) In that particular sense you would be better off with an inconsistent position, since you would get good results in one or more of the cases.
I thought I did, sort of, make the case for that in the post on the Alien Implant and the comments on that. It’s true that it’s not much of a case, since it is basically just saying, “obviously this is good and that’s bad,” but that’s how it is. Here is Scott Alexander with a comment making the case:
How can I make this clearer...okay. Let’s say there have been a trillion Calvinists throughout history. They’ve all been rationalists and they’ve all engaged in this same argument. Some of them have been pro-sin for the same reasons you are, others have been pro-virtue for the same reasons I am. Some on each side have changed their minds after having listened to the arguments. And of all of these trillion Calvinists, every single one who after all the arguments decides to live a life of virtue—has gone to Heaven. And every single one who, after all the arguments, decides to live a life of sin—has gone to Hell.
To say that you have no reason to change your mind here seems to be suggesting that there’s a pretty good chance you will be the exception to a rule that has held 100% of the time in previous cases: the sinful Calvinist who goes to Heaven, or the virtuous Calvinist who goes to Hell. If this never worked for a trillion people in your exact position, why do you think it will work for you now?
In other words, once the correlation is strong enough, the fact that you know for sure that something bad will happen if you make that choice, is enough reason not to make the choice, despite your reasoning about causality.
And once you realize that this is true, you will realize that it can be true even when the correlation is less than 100%, although the effect size will be smaller.
However, if we speak of good and bad in terms of good and bad results, all three positions (two-boxing, smoking, and convincing yourself of something apart from evidence) are bad in that they have bad results (no million, cancer, and potentially believing something false.)
Not really—one of the main points of the smoking lesion is that smoking doesn’t cause cancer. It seems to me that to choose not to smoke is to confuse correlation with causation—smoking and cancer are, in the hypothetical world of the smoking lesion, highly correlated but neither causes the other. To think that opting not to smoke has a health benefit in the world of the smoking lesion is to engage in magical thinking.
Similarly, Newcomb may be an interesting way of thinking about precommitments and decision theories for AGIs, but the fact remains that Omega has made its choice already—your choice now doesn’t affect what’s in the box. Nozick’s statement of Newcomb is not asking if you want to make some sort of precommitment—it is asking you what you want to do after Omega has done whatever Omega has done and has left the scene. Nothing you do at that point can affect the contents of the boxes.
And, willfully choosing to convince yourself that you have free will and then succeeding in doing so cannot possibly lead one astray for the obvious reason that if you don’t have free will, you can’t willfully choose to do anything. If you willfully choose to convince yourself that you have free will then you have free will.
In other words, once the correlation is strong enough, the fact that you know for sure that something bad will happen if you make that choice, is enough reason not to make the choice, despite your reasoning about causality.
In the original statement of the smoking lesion, we don’t know for sure that something bad will happen if we smoke. It states that smoking is “strongly correlated with lung cancer,” not that the correlation is 100%. And, even if the past correlation between A and B was 100%, there is no reason to assume that the future correlation will be 100%, particularly if A does not cause B.
And once you realize that this is true, you will realize that it can be true even when the correlation is less than 100%, although the effect size will be smaller.
The only reason a high correlation is meaningful input into a decision is because it suggests a possible causal relationship. Once you understand the causal factors, correlation no longer provides any additional relevant information.
I am not confusing correlation and causation. I am saying that correlation is what matters, and causation is not.
To think that opting not to smoke has a health benefit in the world of the smoking lesion is to engage in magical thinking.
It would be, if you thought that not smoking caused you not to get cancer. But that is not what I think. I think that you will be less likely to get cancer, via correlation. And I think being less likely to get cancer is better than being more likely to get it.
Nozick’s statement of Newcomb is not asking if you want to make some sort of precommitment—it is asking you what you want to do after Omega has done whatever Omega has done and has left the scene. Nothing you do at that point can affect the contents of the boxes.
I agree, and the people here arguing that you have a reason to make a precommitment now to one-box in Newcomb are basically distracting everyone from the real issue. Take the situation where you do not have a precommitment. You never even thought about the problem before, and it comes on you by surprise.
You stand there in front of the boxes. What is your estimate of the chance that the million is there?
Now think to yourself: suppose I choose to take both. Before I open them, what will be my estimate of the chance the million is there?
And again think to yourself: suppose I choose to take only one. Before I open them, what will be my estimate of the chance the million is there?
You seem to me to be suggesting that all three estimated chances should be the same. And I am not telling you what to think about this. If your estimates are the same, fine. And in that case, I entirely agree that it is better to take both boxes.
I say it is better to take one box if and only if your estimated chances are different for those cases, and your expected utility based on the estimates will be greater using the estimate that comes after choosing to take one box.
Do you disagree with that? That is, if we assume for the sake of argument that your estimates are different, do you still think you should always take both? Note that if your estimates are different, you may be certain you will get the million if you take one box, and certain that you will not, if you take both.
This is why I am saying that correlation matters, not causation.
The only reason a high correlation is meaningful input into a decision is because it suggests a possible causal relationship. Once you understand the causal factors, correlation no longer provides any additional relevant information.
This is partly true, but what you don’t seem to realize is that the direction of the causal relationship does not matter. That is, the reason you are saying this is that e.g. if you think that a smoking lesion causes cancer, then choosing to smoke will not make your estimate of the chances you will get cancer any higher than if you choose not to smoke. And in that case, your estimates do not differ. So I agree you should smoke in such a case. But—if the lesion is likely to cause you to engage in that kind of thinking and go through with it, then choosing to smoke should make your estimate of the chance that you have the lesion higher, because it is likely that the reason you are being convinced to smoke is that you have the lesion. And in that case, if the chance increases enough, you should not smoke.
I am saying that correlation is what matters, and causation is not.
I do not understand why you think that (I suspect the point of this thread is to explain why, but in spite of that, I do not understand).
You seem to me to be suggesting that all three estimated chances should be the same.
Yes, that is what I am saying.
Note that if your estimates are different, you may be certain you will get the million if you take one box, and certain that you will not, if you take both.
No. Nowhere in Nozick’s original statement of Newcomb’s problem is any indication that Omega is omniscient to be found. All Nozick states regarding Omega’s prescience is that you have “enormous confidence” in the being’s power to predict, and that the being has a really good track record of making predictions in the past. Over the years, the problem has morphed in the heads of at least some LWers such that Omega has something resembling divine foreknowledge; I suspect that this is the reason behind at least some LWers opting to “one box”.
But—if the lesion is likely to cause you to engage in that kind of thinking and go through with it, then choosing to smoke should make your estimate of the chance that you have the lesion higher, because it is likely that the reason you are being convinced to smoke is that you have the lesion.
Yes, I agree with that – choosing to smoke provides evidence that you have the lesion.
And in that case, if the chance increases enough, you should not smoke.
No. The fact that you have chosen to smoke may provide evidence that you have the lesion, but it does not increase the chances that you will get cancer. Think of this example:
Suppose that 90% of people with the lesion get cancer, and 99% of the people without the lesion do not get cancer.
Suppose that you have the lesion. In this case the probability that you will get cancer is .9, independent of whether or not you smoke.
Now, suppose that you do not have the lesion. In this case the probability that you will get cancer is .01, independent of whether or not you smoke.
You clearly either have the lesion or do not have the lesion. That was determined long before you made a choice about smoking, and your choice to or not to smoke does not change whether or not you have the lesion.
So, since the probability that a person with the lesion gets cancer is unaffected by his/her choice to smoke (it is .9), and the probability that a person without the lesion gets cancer is likewise unaffected by his/her choice to smoke (it is .01), then if you want to smoke you ought to go ahead and smoke; it isn’t going to affect the likelihood of your getting cancer (albeit your health insurance rates will likely go up, since choosing to smoke provides evidence that you have the lesion and will likely get cancer).
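A minimal Python sketch of this dominance reasoning, using the .9 and .01 figures above; the specific utility values for the pleasure of smoking and the cost of cancer are assumptions added purely for illustration.

```python
# Dominance reasoning sketch: within each lesion state, the probability of
# cancer is stipulated to be unaffected by the choice to smoke.
P_CANCER = {"lesion": 0.90, "no lesion": 0.01}
SMOKING_PLEASURE = 10.0   # assumed utility of smoking (illustrative)
CANCER_COST = 100.0       # assumed disutility of cancer (illustrative)

for state, p in P_CANCER.items():
    u_smoke = SMOKING_PLEASURE - CANCER_COST * p
    u_abstain = -CANCER_COST * p
    # Smoking comes out ahead by exactly SMOKING_PLEASURE in both states,
    # which is the point being made here.
    print(f"{state}: smoke = {u_smoke:.2f}, abstain = {u_abstain:.2f}")
```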
You agreed that you are saying that the three estimated chances are the same. That is not consistent with admitting that your choices are evidence (at least for you) one way or another—if they are evidence, then your estimate should change depending on which choice you make.
Look at Newcomb in the way you wanted to look at the lesion. Either the million is in the box or it is not.
Let’s suppose that you look at the past cases and it was there some percentage of the time. We can assume 40% for concreteness. Suppose you therefore estimate that there is a 40% chance that the million is there.
Suppose you decide to take both. What is your estimate, before you check, that the million is there?
Again, suppose you decide to take one. What is your estimate, before you check, that the million is there?
You seem to me to be saying that the estimate should remain fixed at 40%. I agree that if it does, you should take both. But this is not consistent with saying that your choice (in the smoking case) provides evidence you have the lesion; this would be equivalent to your choice to take one box being evidence that the million is there.
We don’t have to make Omega omniscient for there to be some correlation. Suppose that 85% of the people who chose one box found the million, but because many people took both, the total percentage was 40%. Are you arguing in favor of ignoring the correlation, or not? After you decide to take the one box, and before you open it, do you think the chance the million is there is 40%, or 85% or something similar?
I am saying that a reasonable person would change his estimate to reflect more or less the previous correlation. And if you do, when I said “you may be certain,” I was simply taking things to an extreme. We do not need that extreme. If you think the million is more likely to be there, after the choice to take the one, than after the choice to take both, and if this thinking is reasonable, then you should take one and not both.
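A small numerical sketch of the estimate-updating being described; the 85% and 40% figures come from the comment above, while the assumption that 40% of past subjects took one box is added only to make the arithmetic concrete.

```python
# How the estimate of "the million is there" moves with the choice,
# given the past frequencies stipulated above.
p_million_overall = 0.40   # fraction of all past subjects who found the million
p_million_one_box = 0.85   # fraction of past one-boxers who found it
frac_one_boxers = 0.40     # assumed fraction of past subjects who took one box

# The remaining cases determine the frequency among past two-boxers.
p_million_two_box = (p_million_overall
                     - p_million_one_box * frac_one_boxers) / (1 - frac_one_boxers)

print(f"estimate before choosing:        {p_million_overall:.0%}")   # 40%
print(f"estimate after choosing one box: {p_million_one_box:.0%}")   # 85%
print(f"estimate after choosing both:    {p_million_two_box:.0%}")   # 10%
```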
You agreed that you are saying that the three estimated chances are the same. That is not consistent with admitting that your choices are evidence (at least for you) one way or another—if they are evidence, then your estimate should change depending on which choice you make.
Mea culpa, I was inconsistent. When I was thinking of Newcomb, my rationale was that I already know myself well enough to know that I am a “two-boxing” kind of person, so actually deciding to two-box does not really provide (me) any additional evidence. I could have applied the same logic in the smoking lesion – surely the fact that I want to smoke is already strong evidence that I have the lesion and actually choosing to smoke does not provide additional evidence.
In fact, in both cases, actually choosing to “one box” or “two box”, or to smoke or not to smoke, does provide evidence to an outside observer (hence my earlier quip that choosing to smoke will cause your insurance rates to increase), and may provide new evidence to the one making the choice, depending on his/her introspective awareness (if he/she is already very in touch with his/her thoughts and preferences, then actually making the choice may not provide him/her much more in the way of evidence).
However, whether or not my choice provides me evidence is a red herring. It seems to me that you are confusing the idea of increasing or decreasing your confidence that a thing will (or did) happen with the idea of increasing or decreasing the probability that it actually will (or did) happen. These two things are not the same, and in the case of the smoking lesion hypothetical, you should not smoke only if smoking increases the probability of actually getting cancer – merely increasing your assessment of the likelihood that you will get cancer is not a good reason to not smoke.
Similarly, even if choosing to open both boxes increases your expectation that Omega put nothing in the second box, the choice did not change whether or not Omega actually did put nothing in the second box.
We don’t have to make Omega omniscient for there to be some correlation. Suppose that 85% of the people who chose one box found the million, but because many people took both, the total percentage was 40%. Are you arguing in favor of ignoring the correlation, or not?
Yes, I am arguing in favor of ignoring the correlation. Correlation is not causation. Omega’s choice has already been made – nothing that I do now will change what’s in the second box.
It seems to me that you are confusing the idea of increasing or decreasing your confidence that a thing will (or did) happen with the idea of increasing or decreasing the probability that it actually will (or did) happen.
While I do think those are the same thing as long as your confidence is reasonable, I am not confusing anything with anything else, and I understand what you are trying to say. It just is not relevant to decision making, where what is relevant is your assessment of things.
In other words, from my point of view, “the probability a thing will happen” just is your reasonable assessment, not an objective feature of the world.
Suppose we found out that determinism was true: given the initial conditions of the universe, one particular result necessarily follows with 100% probability. If we consider “the probability a thing will happen” as an objective feature of the world, then in this situation, everything has a probability of 100% or 0%, as an objective feature. Consequently, by your method of decision making, it does not matter what you do, ever; because you never change the probability that a thing will actually happen, but only your assessment of the probability.
Obviously, though, if we found out that determinism was true, we would not suddenly stop caring about our decisions; we would keep making them in the same way as before. And what information would we be using? We would obviously be using our assessment of the probability that a result would follow, given a certain choice. We could not be using the objective probabilities since we could not change them by any decision.
So if we would use that method if we found out that determinism was true, we should use that method now.
Again, every time I brought up the idea of a perfect correlation, you simply fought the hypothetical instead of addressing it. And this is because in the situation of a perfect correlation, it is obvious that what matters is the correlation and not causation: in Scott Alexander’s case, if you know that living a sinful life has 100% correlation with going to hell, that is absolutely a good reason to avoid living a sinful life, even though it does not change the objective probability that you will go to hell (which would be either 100% or 0%).
When you choose an action, it tells you a fact about the world: “I was a person who would make choice A” or “I was a person who would make choice B.” And those are different facts, so you have different information in those cases. Consider the Newcomb case. You take two boxes, and you find out that you are a person who would take two boxes (or if you already think you would, you become more sure of this.) If you took only one box, you would instead find out that you were a person who would take one box. In the case of perfect correlation, it would be far better to find out you were a person who would take one, than a person who would take two; and likewise even if the correlation is very high, it would be better to find out that you are a person who would take one.
You answer, in effect, that you cannot make yourself into a person who would take one or two, but this is already a fixed fact about the world. I agree. But you already know for certain that if you take one, you will learn that you are a person who would take one, and if you take both, you will learn that you are a person who would take both. You will not make yourself into that kind of person, but you will learn it nonetheless. And you already know which is better to learn, and therefore which you should choose.
The same is true about the lesion: it is better to learn that you do not have the lesion, than that you do, or even that you most likely do not have it, rather than learning that you probably have it.
You stated that you think that the idea of increasing or decreasing your confidence that a thing will (or did) happen and the idea of increasing or decreasing the probability that it actually will (or did) happen are “the same thing as long as your confidence is reasonable”. I disagree with the idea that the probability that a thing actually will (or did) happen is the same as your confidence that a thing will (or did) happen, as illustrated by these examples:
John’s wife died under suspicious circumstances. You are a detective investigating the death. You suspect John killed his wife. Clearly, John either did or did not kill his wife, and presumably John knows which of these is the case. However, as a detective, as you uncover each new piece of evidence, you will adjust your confidence that John killed his wife either up or down, depending on whether the evidence supports or refutes the idea that John killed his wife. However, the evidence does not change the fact of what actually happened – it just changes your confidence in your assessment that John killed his wife. This example is like the Newcomb example – Omega either did or did not put $1M in the second box – any evidence that you obtain based on your choice to one box or two box may change your assessment of the likelihood, but it does not affect the reality of the matter.
Suppose I put 900 black marbles and 100 white marbles in an opaque jar and mix them more or less uniformly. I now ask you to estimate the probability that a marble selected blindly from the jar will be white, and then to actually remove a marble, examine it, and replace it. This is repeated a number of times, and each time the marble is replaced, the contents of the jar are mixed. Suppose that, due to luck, your first four picks yield two white marbles and two black marbles. You will probably assess the likelihood of the next marble being white at or around .5. However, after an increasing number of trials, your estimate will begin to converge on .1. However, the actual probability has been .1 all along – what has changed is your assessment of the probability. This is like the smoking lesion hypothetical, where your decision to smoke may increase your assessment of the probability that you will get cancer, but does not affect the actual probability that you will get cancer.
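A quick simulation sketch of the marble example: the jar's composition (the "actual" probability of .1) never changes, while the observer's running estimate drifts toward it as draws accumulate.

```python
import random

random.seed(0)            # illustrative run; any seed shows the same convergence
P_WHITE = 0.1             # 100 white marbles out of 1000
white = 0

for trial in range(1, 10_001):
    if random.random() < P_WHITE:   # one blind draw, then replace and mix
        white += 1
    if trial in (4, 100, 10_000):
        print(f"after {trial:>6} draws: estimated P(white) = {white / trial:.3f}")
```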
In other words, from my point of view, “the probability a thing will happen” just is your reasonable assessment, not an objective feature of the world.
In both the examples listed above, there is an objective reality (John either did or did not kill his wife, and the probability of selecting a white marble is .1), and there is your confidence that John killed his wife, and your estimation of the probability of selecting a white marble. These things all exist, and they are not the same.
Again, every time I brought up the idea of a perfect correlation, you simply fought the hypothetical instead of addressing it.
You brought up the idea of omniscience when you said:
Note that if your estimates are different, you may be certain you will get the million if you take one box, and certain that you will not, if you take both.
and I addressed it by pointing out that omniscience is not a part of Newcomb. Perfect correlation likewise is not a part of the smoking lesion. Perfect correlation of past trials is an aspect of the Newcomb problem, but it is not really qualitatively different from a merely high correlation: it does not imply that “you may be certain you will get the million if you take one box, and certain that you will not, if you take both,” in the same way that flipping a coin six times in a row and getting heads each time does not imply that you will forever get heads each time you flip that coin. I did consider perfect correlation of past trials in the Newcomb problem, because it is built into Nozick’s statement of the problem. And perfect correlation of past trials in the smoking lesion, while not part of the smoking lesion as originally stated, does not change my decision to smoke.
I was not fighting the hypothetical when I stated that omniscience is not part of Newcomb – I merely pointed out that you changed the hypothetical; a Newcomb with an omniscient Omega is a different problem than the one proposed by Nozick. I am sticking with Nozick’s and Egan’s hypotheticals.
It is true that I did not address Yvain’s predestination example. I did not find it to be relevant because Calvinist predestination involves actual predeterminism and omniscience, neither of which is anywhere suggested by Nozick. In short, Yvain has invented a new, different hypothetical; if we can’t agree on Newcomb, I don’t see how adding another hypothetical into the mix helps.
I have stated my position with the most succinct example that I can think of, and you have not addressed that example. The example was:
Suppose that 90% of people with the lesion get cancer, and 99% of the people without the lesion do not get cancer.
Suppose that you have the lesion. In this case the probability that you will get cancer is .9, independent of whether or not you smoke.
Now, suppose that you do not have the lesion. In this case the probability that you will get cancer is .01, independent of whether or not you smoke.
You clearly either have the lesion or do not have the lesion. That was determined long before you made a choice about smoking, and your choice to or not to smoke does not change whether or not you have the lesion.
So, since the probability that a person with the lesion gets cancer is unaffected by his/her choice to smoke (it is .9), and the probability that a person without the lesion gets cancer is likewise unaffected by his/her choice to smoke (it is .01), then if you want to smoke you ought to go ahead and smoke; it isn’t going to affect the likelihood of your getting cancer.
A similar example can be made for two-boxing:
You are either a person whom Omega thinks will two-box or you are not. Based on Omega’s assessment it either will or will not place $1M in box two.
Only after Omega has done this will it make its offer to you.
Your choice to one-box or two-box may change your assessment as to whether Omega has placed $1M in box two, but it does not change whether Omega actually has placed $1M in box two.
If Omega placed $1M in box two, your expected utility (measured in $) is:
$1M if you one-box,
$1.001M if you two-box
If Omega did not place $1M in box two, your expected utility is:
$0 if you one-box
$1K if you two-box
Your choice to one-box vs two box does not change whether Omega did or did not put $1M in box two; Omega had already done that before you ever made your choice.
Therefore, since your expected utility is higher when you two-box regardless of what Omega did, you should two-box.
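For concreteness, a tiny Python sketch of the dominance table just laid out, with the payoffs exactly as stated above:

```python
# Payout table: (contents of box two, your choice) -> dollars received.
payoff = {
    ("million", "one-box"): 1_000_000,
    ("million", "two-box"): 1_001_000,
    ("empty",   "one-box"): 0,
    ("empty",   "two-box"): 1_000,
}

for contents in ("million", "empty"):
    gain = payoff[(contents, "two-box")] - payoff[(contents, "one-box")]
    # Whatever Omega already did, two-boxing pays $1,000 more.
    print(f"if box two is {contents}: two-boxing gains ${gain:,}")
```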
I don’t know that I can explain my position any more clearly than that; I suspect that if we are still in disagreement, we should simply agree to disagree (regardless of what Aumann might say about that :) ). After all, Nozick stated in his paper that it is quite difficult to obtain consensus on this problem:
To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.
Also, I do agree with your original point – Newcomb and the smoking lesion are equivalent in that similar reasoning that would lead one to one-box would likewise lead one to not smoke, and similar reasoning that would lead one to two-box would lead one to smoke.
I did not disagree that you can talk about “actual probabilities” in the way that you did. I said they are irrelevant to decision making, and I explained that using the example of determinism. This is also why I did not comment on your detailed scenario; because it uses the “actual probabilities” in the way which is not relevant to decision making.
Let me look at that in detail. In your scenario, 90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer.
Let’s suppose that 50% of the people have the lesion and 50% do not, just to make the situation specific.
The probability of having the lesion given a random person (and it doesn’t matter whether you call this an actual probability or a subjective assessment—it is just the proportion of people) will be 50%, and the probability of not having the lesion will be 50%.
Your argument that you should smoke if you want does not consider the correlation between having the lesion and smoking, of course because you consider this correlation irrelevant. But it is not irrelevant, and we can see that by seeing what happens when we consider it.
Suppose 95% of people with the lesion choose to smoke, and 5% of the people with the lesion choose not to smoke. Similarly, suppose 95% of the people without the lesion choose not to smoke, and 5% of the people without the lesion choose to smoke.
Given these stipulations it follows that 50% of the people smoke, and 50% do not.
For a random person, the total probability of getting cancer will be 45.5%. This is an “actual” probability: 45.5% of the total people will actually get cancer. This is just as actual as the probability of 90% that a person with the lesion gets it. If you pick a random person with the lesion, 90% of such random choices will get cancer; and if you pick a random person from the whole group, 45.5% of such random choices will get cancer.
Before you choose, therefore, your estimated probability of getting cancer will be 45.5%. You seem to admit that you could have this estimated probability, but want to say that the “real” probability is either 90% or 1%, depending on whether you have the lesion. But in fact all the probabilities are equally real, depending on your selection process.
What you ignored is the probability that you will get cancer given that you smoke. From the above stipulations, it follows of necessity that 85.55% of people choosing to smoke will get cancer, and 5.45% of people choosing not to smoke will get it. You say that this changes your estimate but not the “real probability.” But this probability is quite real: it is just as true that 85.55% of smokers will get cancer, as that 90% of people with the lesion will.
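Those conditional figures follow mechanically from the stipulations; here is a short check, as a sketch (it only counts population fractions, nothing in it is causal):

```python
p_lesion = 0.50
p_cancer_given_lesion, p_cancer_given_clear = 0.90, 0.01
p_smoke_given_lesion, p_smoke_given_clear = 0.95, 0.05

p_smoke = p_lesion * p_smoke_given_lesion + (1 - p_lesion) * p_smoke_given_clear
p_cancer = p_lesion * p_cancer_given_lesion + (1 - p_lesion) * p_cancer_given_clear

# Given the lesion status, smoking and cancer are independent, so joint terms factor.
p_cancer_and_smoke = (p_lesion * p_smoke_given_lesion * p_cancer_given_lesion
                      + (1 - p_lesion) * p_smoke_given_clear * p_cancer_given_clear)
p_cancer_and_no_smoke = (p_lesion * (1 - p_smoke_given_lesion) * p_cancer_given_lesion
                         + (1 - p_lesion) * (1 - p_smoke_given_clear) * p_cancer_given_clear)

print(f"P(cancer)            = {p_cancer:.2%}")                              # 45.50%
print(f"P(cancer | smoke)    = {p_cancer_and_smoke / p_smoke:.2%}")          # 85.55%
print(f"P(cancer | no smoke) = {p_cancer_and_no_smoke / (1 - p_smoke):.2%}") # 5.45%
```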
This is the situation I originally described, except not as extreme. If you smoke, you will be fairly sure (and with a calibrated judgement of probability) that you will get cancer, and if you do not, you will be fairly sure that you will not get cancer.
Let’s look at this in terms of calculating an expected utility. You suggest such a calculation in the Newcomb case, where you get more expected utility by taking two boxes, whether or not Omega put the million there. In the same way, in the smoking case, you think you will get more utility by smoking, whether or not you have the lesion. But notice that you are calculating two different values, one in case you have the lesion or the million, and one where you don’t. In real life you have to act without knowing whether the lesion or the million is there. So you have to calculate an overall expected utility.
What would that be? It is easy to see that it is impossible to calculate an unbiased estimate of your expected utility which says that overall you will get more by taking two boxes or by smoking. This is necessarily so, because on average the people who smoke get less utility, and the people who take two boxes also get less utility, if there is a significant correlation between Omega’s guess and people’s actions.
Let’s try it anyway. Let’s say the overall odds of the million being there are 50/50, just like we had 50/50 odds of the lesion being there. According to you, your expected utility from taking two boxes will be $501,000, calculating your expectation from the “real” probabilities. And your expected utility from taking one box will be $500,000.
But it is easy to see that it is mathematically impossible for those to be unbiased estimates if there is some correlation between the person’s choice and Omega’s guess. E.g., if 90% of the people who are guessed to be one-boxers also take just one box, and 90% of the people who are guessed to be two-boxers also take two boxes, then the average utility from taking one box will be $900,000, and the average utility from taking both will be $101,000. These are “actual” utilities, that is, they are the average that those people really get. This proves definitively that estimates which say that you will get more by taking two are necessarily biased estimates.
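A short check of those averages, as a sketch, assuming Omega guesses "one-boxer" for half the population and the 90% match rates stated above:

```python
p_guess_one = 0.5    # assumed fraction guessed to be one-boxers
p_match = 0.9        # fraction whose actual choice matches Omega's guess

p_one_box = p_guess_one * p_match + (1 - p_guess_one) * (1 - p_match)   # 0.5
p_million_given_one = p_guess_one * p_match / p_one_box                 # 0.9
p_million_given_two = p_guess_one * (1 - p_match) / (1 - p_one_box)     # 0.1

avg_one_box = p_million_given_one * 1_000_000
avg_two_box = p_million_given_two * 1_001_000 + (1 - p_million_given_two) * 1_000

print(f"average payout of one-boxers: ${avg_one_box:,.0f}")   # $900,000
print(f"average payout of two-boxers: ${avg_two_box:,.0f}")   # $101,000
```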
But, you will say, your argument shows that you absolutely must get more by taking both. So what went wrong? What went wrong, is that your argument implicitly assumed that there is no correlation between your choice and what is in the box, or in the smoking case, whether you have the lesion. But this is false by the statement of the problem.
It is simply a mathematical necessity from the statement of the problem that your expected utility will be higher by one boxing and by not smoking (given a high enough discrepancy in utilities and high enough correlations). This is why I said that correlation matters, not causation.
But in fact all the probabilities are equally real, depending on your selection process.
This is not so.
You are conflating two kinds of uncertainty (and so, probability): the uncertainty of the actual outcome in the real, physical world, and the uncertainty of some agent not knowing the outcome.
For a random person, the total probability of getting cancer will be 45.5%.
Let’s unroll this. The actual probability for a random person to get cancer is either 90% or 1%. You just don’t know which one of these two numbers applies, so you produce an estimate by combining them. Your estimate doesn’t change anything in the real world and someone else—e.g. someone who has access to the lesion-scanning results for this random person—would have a different estimate.
Note, by the way, the difference between speaking about a “random person” and about the whole population. For the population as a whole, the 45.5% value is correct: out of 1000 people, about 455 will get cancer. But for a single person it is not correct: a single person has either a 90% actual probability or a 1% actual probability.
For simplicity consider an urn containing an equal number of white and black balls. You would say that a “random ball” has a 50% chance of being black—but each ball is either black or white, it’s not 50% of anything. 50% of the entire set of balls is black, true, but each ball’s state is not uncertain and is not subject to (“actual”) probability.
The actual probability for a random person to get cancer is either 90% or 1%. You just don’t know which
“You just don’t know which” is what probability is. The “actual” probability is the probability conditional on all the information we actually have, namely 45.5%; 90% or 1% would be the probability if, contrary to the fact, we also knew whether the person has the lesion.
The problem does not say that any physical randomness is involved. The 90% of those with the lesion may be determined by entirely physical and determinate causes. And in that case, the 90% is just as “actual” or “not actual”, whichever you prefer, as the 45.5% of the population who get cancer, or as the 85.55% of smokers who get cancer.
Second, physical randomness is irrelevant anyway, because the only way it would make a difference to your choice would be by making you subjectively uncertain of things. As I said in an earlier comment, if we knew for certain that determinism was true, we would make our choices in the same way we do now. So the only uncertainty that is relevant to decision making is subjective uncertainty.
I don’t want to get into the swamp of discussing the philosophy of uncertainty/probability (I think it’s much more complicated than “it’s all in your head”), so let’s try another tack.
Let me divide the probabilities in any particular situation into two classes: immutable and mutable.
Immutable probabilities are the ones you can’t change. Note that “immutable” here implies “in this particular context”, so it’s a local and maybe temporary immutability. Mutable ones are those you can change.
Both of these you may or may not know precisely and if not, you can generate estimates.
In your lesion example, the probability for a person to get cancer is immutable. You may get a better estimate of what it is, but you can’t change it—it is determined by the presence or the absence of the lesion and you can’t do anything about that lesion.
Imagine two parallel universes where you looked at a “random person”, say, Alice, from your scenario. You ask her if she smokes. In universe A she says “Yes”, so your estimate of the probability of her getting cancer is now 85.55%. In universe B she says “None of your business”, so your estimate is still 45.5%.
Your estimates are quite different, and yet in both universes Alice’s chances of getting cancer are the same—because you improving your estimates did nothing to the physical world. The probability of her getting cancer is immutable, even when your estimate changes.
Compare this to the situation where you introduce a surgeon into your example. The surgeon can remove the lesion, and after that operation the people with the removed lesion are just like the people who never had it in the first place: they are unlikely to get cancer and unlikely to smoke. For the surgeon the probability of getting cancer is mutable: he can actually affect it.
Let’s say the surgeon operates on Alice. His initial probability is 45.5%; after he opens her up and discovers the lesion it becomes 90% (but the actual probability of Alice getting cancer hasn’t changed yet!), and once he removes it, the probability becomes 1%. That’s an intervention—changing not just the estimate, but the actual underlying probability. Alice’s actual probability used to be 90% and now is 1%. For the surgeon it’s mutable.
This amounts to saying, “the probability that matters is the probability that I will get cancer, given that I have the lesion” or “the probability that matters is the probability that I will get cancer, given that I do not have the lesion.”
That’s what I’m denying. What matters is the probability that you will get cancer, period.
You are confusing things and probabilities. Getting cancer largely depends on having the lesion or not. But the probability of getting cancer depends, not on the thing, but on the probability of having the lesion. And the probability of having the lesion is mutable.
Getting cancer largely depends on having the lesion or not. But the probability of getting cancer depends, not on the thing, but on the probability of having the lesion.
Let me quote your own post where you set up the problem:
90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer.
This is the probability of getting cancer which depends on the “thing”, that is, the lesion. It does NOT depend on the probability of having a lesion.
90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer
Those, you are saying, are frequencies and not probabilities. OK, let’s continue:
Let’s suppose that 50% of the people have the lesion and 50% do not, just to make the situation specific.
The probability of having the lesion given a random person … will be 50%, and the probability of not having the lesion will be 50%.
So why is having a lesion (as a function of being a human in this particular population) a probability, while having cancer (as a function of having a lesion) is a frequency?
50% of the people have the lesion. That is a frequency. But if you pick a random person, that person either has the lesion or not. The probability (not the frequency, which is not meaningful for such an individual) that the random person has the lesion is 50%, because that is our expectation that the person has the lesion.
The parallel still holds. If you pick a random person with the lesion, he will either develop cancer or not. The probability that the random person with the lesion develops cancer is 90%. Is that not so?
“Pick a random person with the lesion” has more than one meaning.
If you pick a random person out of the whole population, then the probability that he will develop cancer is 45.5%. This is true even if he has the lesion, if you do not know that he has the lesion, since the probability is your estimate.
If you pick a random person out of the population of people who have the lesion (and therefore you already know who has the lesion), then the probability that he will develop cancer is 90%.
Basically you are simply pointing out that if you know if you have the lesion, you will be better off smoking. That is true. In the same way, if you know whether Omega put the million in the box or not, you will be better off taking both boxes. Of course since you are maintaining a consistent position, unlike the others here, that isn’t going to bother you.
But if you do not know if you have the lesion, and if you do not know if the million is in the box, an unbiased estimate of your expected utility must say that you will get more utility by not smoking, and by taking one box.
Yes, I two-box (LW tends to treat it as a major moral failing X-D)
But if you do not know if you have the lesion, and if you do not know if the million is in the box, an unbiased estimate of your expected utility must say that you will get more utility by not smoking, and by taking one box.
And that’s precisely what I disagree with.
The difference is between doing an intervention, that is, changing something in the outside world, and adjusting your estimate which changes nothing in the outside world. “Not smoking” will lead you to adjust your estimate, but it’s not an intervention.
If that’s precisely what you disagree with, can you provide an example where you give numerical estimates of your expected utility for the two choices? Since the condition is that you do not know which is the case, you cannot say “utility X if the lesion or no-million, utility Y if not.” You have to say “estimated utility for one choice: X. Estimated utility for other choice: Y.”
Given the terms of the problem, it is mathematically impossible to provide estimates where two boxing or smoking will be higher, without those estimates being provably biased.
Regarding the supposed intervention, choosing not to smoke is an intervention, and that is what changes your estimate, and therefore your expected utility.
can you provide an example where you give numerical estimates of your expected utility for the two choices?
I don’t think that utility functions are a useful approach to human decision-making. However in this context if you specify that smoking is pleasurable (and so provides +X utility), I would expect my utility in the I-choose-to-smoke case to be X higher than in the I-choose-not-to-smoke case.
Note, though, that I would have different utilities for the I-want-to-smoke and I-do-not-want-to-smoke cases.
choosing not to smoke is an intervention
No, it is not since smoking here is not a cause which affects your chances of cancer.
Utility functions are idealizations. So if someone suggests that I use a specific utility function, I will say, “No, thank you, I intend to remain real, not become an idealization.” But real objects are also not circular or square in a mathematical sense, and that does not prevent circles and squares from being useful in dealing with the real world. In the same way it can be useful to use utility functions, and especially when you are talking about situations which are idealized anyway, like the Smoking Lesion and Newcomb.
Your specific proposal will not work, if it is meant to give specific numbers (and maybe you didn’t intend it to anyway). For example, we know there is about an 85% chance you will get cancer if you smoke, and about a 5% chance that you will get cancer if you don’t, given the terms of the problem. So if not getting cancer has significantly more value than smoking, then it is impossible for your answer to work out numerically, without contradicting those proportions.
And that is what you are trying to do: basically you are assuming that your choice is not even correlated with getting cancer, not only that it is not the cause. But the terms of the problem stipulate that your choice is correlated.
“which affects your chances of cancer”
It most certainly does affect the chance that matters, which is your subjective estimate. I pointed out before that people would act in the same way even if they knew that determinism was true. If it was, the chance of everything, in your sense, would either be 100% or 0%, and nothing you ever did would be an intervention, in your sense. But you would do the same things anyway, which shows that what you care about and act on is your subjective estimate.
it is impossible for your answer to work out numerically
The answer you got is the answer. It is basically an assertion that one real number is bigger than another real number. What do you mean by “work out numerically”?
basically you are assuming that your choice is not even correlated with getting cancer
Incorrect. My choice is correlated, it’s just not causal.
It most certainly does affect the chance that matters, which is your subjective estimate.
So, here is where we disagree. I do not think my subjective estimate is “the chance that matters”. For example, what happens if my subjective estimate is mistaken?
people would act in the same way even if they knew that determinism was true
If determinism is true, this sentence makes no sense: there is no choice and no option for people to act in any other way.
I will illustrate how your proposal will not work out mathematically. Let’s suppose your default utility is 150, the utility of smoking is 10, and the utility of cancer is negative 100, so that total utility will be as follows:
no smoking and no cancer: 150.
smoking and no cancer: 160.
no smoking and cancer: 50.
smoking and cancer: 60.
You say that you expect to get 10 more utility by smoking than by not smoking. It is easy to see from the above schema why someone would think that, but it is also mistaken. As I said, if you are using a utility function, you do not say, “X utility in this case, Y utility in that case,” but you just calculate an average utility that you expect overall if you make a certain choice. Of course you are free to reject the whole idea of using a utility function at all, as you already suggested, but if you accept the utility function framework for the sake of argument, your proposal will not work, as I am about to explain.
This is how we would calculate your expected utility:
Expected utility of smoking = 150 + 10 - (100 * probability of cancer).
Expected utility of not smoking = 150 - (100 * probability of cancer).
You would like to say that the probability of cancer is either 90% or 1%, depending on the lesion. But that gives you two different values each for smoking and for not smoking, and this does not fit into the expected utility framework. So we have to collapse this to a single probability in each formula (even if the probability in the smoking case might not be the same as in the non-smoking case). What is that probability?
We might say that the probability is 45.5% in both cases, since we know that over the whole population, this number of people will get cancer. In that case, we would get:
Expected utility of smoking = 114.5.
Expected utility of not smoking = 104.5.
This is what you said would happen. However, it is easy to prove that these cannot be unbiased estimates of your utility. We did not stipulate anything about you which is different from the general population, so if these are unbiased estimates, they should come out equal to the average utility of the people who smoke and of the people who do not smoke. But the actual averages are:
Average utility of smokers: 150 + 10 - (100 * .8555) = 74.45.
Average utility of non-smokers: 150 - (100 * .0545) = 144.55.
So why are your values different from these? The reason is that the above calculation takes the probability of 45.5% and leaves it as is, regardless of smoking, which effectively makes your choice an independent variable. In other words, as I said, you are implicitly assuming that your choice is not correlated with the lesion or with cancer, but is an entirely independent variable. This is contrary to the terms of the problem.
Since your choice is correlated with the lesion and therefore also with cancer, the correct way to calculate your expected utility for the two cases is to take the probability of cancer given that particular choice, which leads to the expected utility of 144.55 if you do not smoke, and 74.45 if you do.
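Here are the two ways of filling in the expected-utility formulas side by side, as a sketch, with the utilities and probabilities exactly as stipulated above:

```python
BASE, SMOKE_BONUS, CANCER_COST = 150, 10, 100
p_cancer_overall = 0.455
p_cancer_given_smoke, p_cancer_given_abstain = 0.8555, 0.0545

# Treating the choice as independent of cancer (the calculation being criticized):
eu_smoke_flat = BASE + SMOKE_BONUS - CANCER_COST * p_cancer_overall   # 114.5
eu_abstain_flat = BASE - CANCER_COST * p_cancer_overall               # 104.5

# Conditioning on the choice, as the stipulated correlation requires:
eu_smoke = BASE + SMOKE_BONUS - CANCER_COST * p_cancer_given_smoke    # 74.45
eu_abstain = BASE - CANCER_COST * p_cancer_given_abstain              # 144.55

print(f"ignoring the correlation: smoke {eu_smoke_flat:.2f}, not smoke {eu_abstain_flat:.2f}")
print(f"using the correlation:    smoke {eu_smoke:.2f}, not smoke {eu_abstain:.2f}")
```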
For example, what happens if my subjective estimate is mistaken?
You will likely get bad results. You can’t fix that by acting on something different from your subjective estimate, because if you think something else is truer than your subjective estimate, then make that your subjective estimate instead. Your subjective estimate matters not because it is automatically right, but because you don’t and can’t have anything which is more right.
If determinism is true, this sentence makes no sense: there is no choice and no option for people to act in any other way.
Consider this situation. Someone is going to work every day to earn money to support himself. Then, one day someone convinces him that determinism is true.
Now maybe determinism is true, and maybe it isn’t. The point is that he is now convinced that it is. What do you expect to happen:
A) The person says, “Either I have 100% chance of starving to death, or a 0% chance. So why should I bother to go to work? It will not affect my chances. Even if I starve to death precisely because of not going to work, it will just mean there was a 100% chance of me not going to work in the first place. I still don’t have any intervention that can change my chances of starving.”
B) The person says, “I might starve if I quit work, but I will probably survive if I keep going to work. So I will keep going to work.”
Determinism as such is not inconsistent with either of these. It is true that if determinism is actually true, then whatever he does, he had a 100% chance of doing that. But there is nothing in the abstract picture to tell you which he is going to do. And in any case, I don’t need to assume that determinism is true. The question is what the person will do, who thinks it is true.
Most people, quite rightly, would expect the second thing to happen, and not the first. That shows that we think that other people are going to act on their subjective estimates, not on the possibility of an “intervention” that changes an objective chance. And if we would do the second thing ourselves, that implies that we are acting on subjective estimates and not on objective chances.
as I said, you are implicitly assuming that your choice is not correlated with the lesion or with cancer
This is incorrect, as I pointed out a comment or two upthread.
The problem is that you still refuse to recognize the distinction between an intervention which changes the outside world and an estimate update which changes nothing in the outside world.
the correct way to calculate your expected utility for the two cases is to take the probability of cancer given that particular choice, which leads to the expected utility of 144.55 if you do not smoke, and 74.45 if you do.
And will you also assert that you can change your expected utility by not smoking?
For example, what happens if my subjective estimate is mistaken?
You will likely get bad results.
Unroll this, please. What does “bad results” mean? Am I more likely to get cancer if my estimate is wrong?
That shows that we think that other people are going to act on their subjective estimates, not on the possibility of an “intervention” that changes an objective chance.
Huh? I don’t understand either why your example shows this or why you think these two things are mutually exclusive opposites.
This is incorrect, as I pointed out a comment or two upthread.
I am explaining why it is correct. Basically you are saying that you cannot change the chance that you will get cancer. But your choice and cancer are correlated variables, so changing your choice changes the expected value of the cancer variable.
You seem to be thinking that it works like this: there are two rows of coins set so that each coin in one row is on the same side as the corresponding coin in the other row: when one is heads, the other is heads, and when one is tails, the other is tails. Now if you go in and flip over one of the coins, the other will not flip. So the coins are correlated, but flipping one over will not change what the other coin is.
The problem with the coin case is that there is a pre-existing correlation, and when you flip a coin, of course it will not flip the other. This means that flipping a coin takes away the correlation. But the correlation between your choice and cancer is a correlation with your choice itself, not with something that comes before your choice. So making a choice determines the expected value of the cancer variable, even if it cannot physically change whether you get cancer. If it did not, your choice would be taking away the correlation, just as flipping a coin takes away the correlation in the coin case. That is why I said you are implicitly assuming that your choice is not correlated with cancer: you are admitting that other people’s choices are correlated, and so are like the rows of coins sitting there, but you think your choice is something that comes later and will take away the correlation in your own case.
The problem is that you still refuse to recognize the distinction between an intervention which changes the outside world and an estimate update which changes nothing in the outside world.
I did not refuse to recognize such a distinction, although it is true that your estimate is part of the world, so updating your estimate is also changing the world. But the main point is that the estimate is what matters, not whether or not your action changes the world.
And will you also assert that you can change your expected utility by not smoking?
Yes. Before you decide whether to smoke or not, your expected utility is 109.5, because this is the average utility over the whole population. If you decide to smoke, your expected utility will become 74.45, and if you decide not to smoke, it will become 144.55. The reason this can happen is that “expected utility” is an expectation, which means that it is something subjective, which can be changed by a change in your estimate of other probabilities.
But note that it is a real expectation, not a fake one: if your expected utility is 144, you expect to get more utility than if your expected utility is 74. It would be an obvious contradiction to say that your expected utility is higher, but you don’t actually expect to get more.
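As a quick arithmetic check on the 109.5 figure (a sketch using the same numbers as above, where half the population smokes and half does not):

    avg_smokers, avg_non_smokers = 74.45, 144.55
    prior_expected_utility = 0.5 * avg_smokers + 0.5 * avg_non_smokers   # 109.5, before you know your own choice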
What does “bad results” mean? Am I more likely to get cancer if my estimate is wrong?
That depends on the direction in which your estimate is wrong. You personally would be more likely to get cancer in that situation, since you would mistakenly assume that smoking will not make it more likely that you would get cancer, and therefore you would not avoid smoking.
I don’t understand either why your example shows this or why you think these two things are mutually exclusive opposites.
The person who decides to stop going to work does that because he cannot change the objective chance that he is going to starve to death. The person who decides to keep going to work has a subjective estimate that he is more likely to survive if he keeps going to work.
This is exactly parallel to the situation we are discussing. Consider a Deterministic Smoking Lesion: 100% of people with the lesion get cancer, no one else gets cancer, and 100% of the people with the lesion choose to smoke, and no one else chooses to smoke. By your way of arguing, it is still true that you cannot change whether you have the lesion or not, so you might as well smoke. That is exactly the same as the person who says that he might as well stop going to work. On the other hand, the person who decides to keep going to work is exactly the same as someone who says, “I cannot physically determine whether I have the lesion or not. However, if I choose not to smoke, I will be able to estimate that I do not have the lesion and will not get cancer. After choosing not to smoke, my subjective estimate of the probability of getting cancer will drop to 0%. So I will not smoke.”
Since we don’t seem to be getting anywhere on this level, let’s try digging deeper (please ignore the balrog superstitions).
Here we are talking about a “choice”. That word/concept is very important in this setup. Let’s dissect it.
I will assert that a great deal of confusion around the Smoking Lesion problem (and others related to it) arises out of the dual meaning attached to the concept of “choice”. There are actually two distinct things happening here.
Thing one is acquiring information. When you decide to smoke, this provides you with new, relevant information and so you update your probabilities and expected utilities accordingly. Note that for this you don’t have to do anything; you just learn, it’s passive acquisition of knowledge. Thing one is what you are focused on.
Thing two is acting, doing something in the physical world. When you decide to smoke, you grab a cigarette (or a pipe, or a cigar, or a blunt, or...) and take a drag. This is an action with potential consequences in reality. In the Smoking Lesion world your action does nothing (except give you a bit of utility) -- it’s not causal and does not change your cancer probabilities.
It is not hard to disassemble a single “choice” into its two components. Let’s stop at the moment of time when you already decided what to do but haven’t done anything yet. At this moment you have already acquired the information—you know what you want / what you have decided—but no action happened. If you don’t want to freeze time imagine the Smoking Lesion problem set on an island where there is absolutely nothing to smoke.
Here the “acquire information” component happened, but the “action” component did not. And does it make the problem easier? Sure, it makes it trivial: you just update on the new information, but there was no action and so we don’t have to concern ourselves with its effect (or lack of it), with causality, with free will, etc.
So I would suggest that the issues with Smoking Lesion are the result of conflating two different things in the single concept of “choice”. Disentangle them and the confusion should—hopefully? -- dissipate or at least lessen.
We can break it down, but I suggest a different scheme. There are three parts, not two. So:
At 1:00 PM, I have the desire to smoke.
At 2:00 PM, I decide to smoke.
At 3:00 PM, I actually smoke.
Number 3 is the action. The choice is number 2, and I will discuss that in a moment. But first, note that #1 and #2 are not the same. This is clear for two reasons. First, smoking is worth 10 utility for everyone. So everyone has the same desire, but some people decide to smoke, and some people decide not to. Even in real life not everyone who has the desire decides to do it. Some people want it, but decide not to.
Second, when I said that the lesion is correlated with the choice, I meant it is correlated with number 2, not number 1. If it was correlated with number 1, you could say, “I have the desire to smoke. So I likely have the lesion. But I can go ahead and smoke; it won’t make cancer any more likely.” And that argument, in that situation, would be correct. That would be exactly the same as if you knew in advance whether or not you had the lesion. If you already know that, smoking will give you more utility. In the same way, in Newcomb, if you know whether or not the million is in the box before you choose, you should take both boxes.
The argument does not work when the correlation is with number 2, however, and we will see why in a moment.
Number 2 does not include the action (which is number 3), but it includes something besides information. It includes the plan of doing number 3, which plan is the direct cause of number 3. It also includes information, as you say, but you cannot have that information without also planning to do 3. Here is why. When you have the desire, you also have the information: “I have the desire.” And in the same way, when you start planning to smoke, you acquire the information, “I am now planning to smoke.” But you do NOT have that information before you start planning to smoke, since it is not even true until then.
When you are deciding whether to smoke or not, you do not yet have the information about whether you are planning to smoke or not, because you have no such plan yet. And you cannot get that information, without forming the plan at the same time.
The lesion is correlated with the plan. So when 2 happens, you form a plan. And you acquire some information, either “I am now planning to smoke,” or “I am now planning not to smoke.”
And that gives you additional information: either “very probably, I had the lesion an hour ago,” or “very probably, I did not have the lesion an hour ago.”
You suppose that this cannot happen, since either you have the lesion or not, from the beginning. But notice that “at 2:00 PM I start planning to smoke” and “at 2 PM I start planning not to smoke,” cannot co-exist in the same world. And since they only exist in different worlds, there should be nothing surprising about the fact that the past of those worlds is probably different.
I don’t see the point of your number 1. If, as you say, everyone has the desire then it contains no information and is quite irrelevant. I also don’t understand what drives the decision to smoke (or not) if everyone wants the same thing.
And you cannot get that information, without forming the plan at the same time.
I am (and, I assume, most people are) perfectly capable of forming multiple plans and comparing them. Is there really the need for this hair-splitting here?
I could have left it out, but I included it in order to distinguish it from number 2, and because I suspected that you were thinking that the lesion was correlated with the desire. In that situation, you are right that smoking is preferable.
I also don’t understand what drives the decision to smoke (or not)
Consider what drives this kind of decision in reality. Some people desire alcohol and drink; some people desire it but do not drink. Normally this is because the ones who drink think it will be good overall, while the ones who don’t think it will be bad overall.
In this case, we have something similar: people who think “smoking cannot change whether I have the lesion or not, so I might as well smoke” will probably plan to smoke, while people who think “smoking will increase my subjective estimate that I have the lesion,” will probably plan not to smoke.
Looking at this in more detail, consider again the Deterministic Smoking Lesion, where 100% of the people with the lesion choose to smoke, and no one else does. What is driving the decision in this case is obviously the lesion. But you can still ask, “What is going on in their minds when they make the decision?” And in that case it is likely that the lesion makes people think that smoking makes no difference, while not having the lesion lets them notice that smoking is a very bad idea.
In the case we were considering, there was a 95% correlation, not a 100% correlation. But a high correlation is on a continuum with the perfect correlation; just as the lesion is completely driving the decision in the 100% correlation case, it is mostly driving the decision in the 95% case. So basically the lesion tends to make people think like Lumifer, while not having the lesion tends to make people think like entirelyuseless.
I am (and, I assume, most people are) perfectly capable of forming multiple plans and comparing them.
If you do that, obviously you are not planning to carry out all of those plans, since they are different. You are considering them, not yet planning to do them. Number 2 is once you are sure about which one you plan to do.
You are basically saying that there is no way to know what you are going to do before you actually do it. I don’t find this to be a reasonable position.
Situations when this happens exist—typically they are associated with internal conflict and emotional stress—but they are definitely edge cases. In normal life your deliberate actions are planned (if only a few seconds beforehand) and you can reliably say what you are going to do just before you actually do it.
Humans possess reflection, the ability to introspect, and knowing what you are going to do almost always precedes actually doing it. I am not sure why you want to keep on conflating knowing and doing.
You are basically saying that there is no way to know what you are going to do before you actually do it.
I am not saying that. Number 2 is different from number 3 -- you can decide whether to smoke, before actually smoking.
What you cannot do, is know what you are going to decide, before you decide it. This is evident from the meaning of deciding to do something, but we can look at a couple examples:
Suppose a chess computer has three options at a particular point. It does not yet know which one it is going to do, and it has not yet decided. Your argument is that it should be able to first find out what it is going to decide, and then decide it. This is a contradiction; suppose it finds out that it is going to do the first. Then it is silly to say it has not yet decided; it has already decided to do the first.
Suppose your friend says, “I have two options for vacation, China and Mexico. I haven’t decided where to go yet, but I already know that I am going to go to China and not to Mexico.” That is silly; if he already knows that he is going to go to China, he has already decided.
In any case, if you could know before deciding (which is absurd), we could just modify the original situation so that the lesion is correlated with knowing that you are going to smoke. Then since I already know I would not smoke, I know I would not have the lesion, while since you presumably know you would smoke, you know you would have the lesion.
So the distinction between acquiring information and action stands?
Yes, but not in the sense that you wanted it to. That is, you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
As I have said before, if you have information in advance about whether you have the lesion, or whether the million is in the box, then it is better to smoke or take both boxes. But if you do not, it is better not to smoke and to take only one box.
you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
I don’t agree with that—what, until the moment I make the decision I have no clue, zero information, about what I will decide? -- but that may not be relevant at the moment.
If I decide to smoke but take no action, is there any problem?
I agree that you can have some probable information about what you will decide before you are finished deciding, but as you noted, that is not relevant anyway.
If I decide to smoke but take no action, is there any problem?
It isn’t clear what you mean by “is there any problem?” If you mean, is there a problem with this description of the situation, then yes, there is some cause missing. In other words, once you decide to smoke, you will smoke unless something comes up to prevent it: e.g. the cigarettes are missing, or you change your mind, or at least forget about it, or whatever.
If you meant, “am I likely to get cancer,” the answer is yes. Because the lesion is correlated with deciding to smoke, and it causes cancer. So even if something comes up to prevent smoking, you still likely have the lesion, and therefore likely get cancer.
Newcomb is similar: if you decide to take only one box, but then absentmindedly grab them both, the million will be likely to be there. While if you decide to take both, but the second one slips out of your hands, the million will be likely not to be there.
It isn’t clear what you mean by “is there any problem?”
Much of the confusion around the Smoking Lesion centers on whether your choice makes any difference to the outcome. If we disassemble the choice into two components of “learning” and “doing”, it becomes clear (to me, at least) that the “learning” part will cause you to update your estimates and the “doing” part will, er, do nothing. In this framework there is no ambiguity about causality, free will, etc.
That is the part of the argument that is missing from the original formulation, and assuming it I think does a disservice to your analysis and the original argument too.
It certainly does not do a disservice to the original argument, since it is the only way it would ever convince someone.
That said, obviously I disagree with that, since I think you should not smoke in the Smoking Lesion case.
I think that the argument is not so much that if you succeed in incorrectly convincing yourself that you have (libertarian) free will, it is not your fault. Instead, I think the argument is that success in willfully convincing yourself that you have free will (or convincing yourself of anything else, for that matter) implies that you have free will. If you didn’t have free will, then you did not really willfully convince yourself of anything—instead, your belief (or lack thereof) in free will is just something that happened.
Sure, but the question is why you should try to convince yourself of libertarian free will, instead of trying to convince yourself of the opposite. If you succeed in the first case, it shows you are right, but if you succeed in the second, it shows you are wrong.
It seems like you answered the question yourself when you said:
Surely it is better to be right than wrong, right?
Yes. I was trying to explain how the argument is supposed to work.
OK. Sorry to have misunderstood.
So, I don’t see the flaw in the argument. Clearly the argument doesn’t really demonstrate that we have free will, but I don’t think that it is intended to do that. It does seem to make the case that if you want to be right about free will, you should try to convince yourself that you have free will.
What am I missing?
That depends. If you think that you should take both boxes in Newcomb, and that you should smoke in the Smoking Lesion, then you are consistent in also thinking that you should try to convince yourself that you have free will. But if you accept some of these positions and not the others, your position is inconsistent.
I disagree with all three, and the argument is implied in my other post about Newcomb and the lesion. In particular, in the case of convincing yourself, the fact that it would be bad to believe something false is a reason not to convince yourself (unless the evidence supports it) even if it is merely something that happens, just like cancer is a reason not to smoke even though it would be just something that happens.
Got it.
I am, per your criteria, consistent. Per Newcomb, I’ve always been a two-boxer. One of my thoughts about Newcomb was nicely expressed in a recent posting by Lumifer.
Per the smoking lesion—as a non-smoker with no desire to smoke and a belief that smoking causes cancer, I’ve never gotten past fighting the hypothetical. However, I just now made the effort and realized that within the hypothetical world of the smoking lesion, I would choose to smoke.
And, I think the argument in favor of trying to convince yourself that you have free will has merit. I do have a slight concern about the word “libertarian” in your formulation of the argument, which is why I omitted it or included it parenthetically. My concern is that under a compatibilist conception of free will, it would be possible to willfully convince yourself of something even if determinism is true. But, if you remove the word “libertarian”, it seems reasonable that a person interested in arriving at truth should attempt to convince himself/herself that he/she has free will.
ETA:
In the parent post, you said:
But here you called the free will argument bad.
Which is it? Is the argument bad or is it only inconsistent with one-boxing and not smoking?
It seems to me that even if we accept your argument that to be consistent one must either two-box, smoke, and try to convince oneself that one has free will, or one-box, not smoke, and not try to convince oneself that one has free will, you still have not made the case that the arguments in favor of two-boxing, smoking, or trying to convince oneself that one has free will are bad arguments.
Intellectually I have more respect for someone who holds a consistent position on these things than someone who holds an inconsistent position. The original point was a bit ad hominem, as most people on LW were maintaining Eliezer’s inconsistent position (one-boxing and smoking).
However, if we speak of good and bad in terms of good and bad results, all three positions (two-boxing, smoking, and convincing yourself of something apart from evidence) are bad in that they have bad results (no million, cancer, and potentially believing something false.) In that particular sense you would be better off with an inconsistent position, since you would get good results in one or more of the cases.
I thought I did, sort of, make the case for that in the post on the Alien Implant and the comments on that. It’s true that it’s not much of a case, since it is basically just saying, “obviously this is good and that’s bad,” but that’s how it is. Here is Scott Alexander with a comment making the case:
In other words, once the correlation is strong enough, the fact that you know for sure that something bad will happen if you make that choice, is enough reason not to make the choice, despite your reasoning about causality.
And once you realize that this is true, you will realize that it can be true even when the correlation is less than 100%, although the effect size will be smaller.
Not really—one of the main points of the smoking lesion is that smoking doesn’t cause cancer. It seems to me that to choose not to smoke is to confuse correlation with causation—smoking and cancer are, in the hypothetical world of the smoking lesion, highly correlated but neither causes the other. To think that opting not to smoke has a health benefit in the world of the smoking lesion is to engage in magical thinking.
Similarly, Newcomb may be an interesting way of thinking about precommitments and decision theories for AGIs, but the fact remains that Omega has made its choice already—your choice now doesn’t affect what’s in the box. Nozick’s statement of Newcomb is not asking if you want to make some sort of precommitment—it is asking you what you want to do after Omega has done whatever Omega has done and has left the scene. Nothing you do at that point can affect the contents of the boxes.
And, willfully choosing to convince yourself that you have free will and then succeeding in doing so cannot possibly lead one astray for the obvious reason that if you don’t have free will, you can’t willfully choose to do anything. If you willfully choose to convince yourself that you have free will then you have free will.
In the original statement of the smoking lesion, we don’t know for sure that something bad will happen if we smoke. It states that smoking is “strongly correlated with lung cancer,” not that the correlation is 100%. And, even if the past correlation between A and B was 100%, there is no reason to assume that the future correlation will be 100%, particularly if A does not cause B.
The only reason a high correlation is meaningful input into a decision is because it suggests a possible causal relationship. Once you understand the causal factors, correlation no longer provides any additional relevant information.
I am not confusing correlation and causation. I am saying that correlation is what matters, and causation is not.
It would be, if you thought that not smoking caused you not to get cancer. But that is not what I think. I think that you will be less likely to get cancer, via correlation. And I think being less likely to get cancer is better than being more likely to get it.
I agree, and the people here arguing that you have a reason to make a precommitment now to one-box in Newcomb are basically distracting everyone from the real issue. Take the situation where you do not have a precommitment. You never even thought about the problem before, and it comes on you by surprise.
You stand there in front of the boxes. What is your estimate of the chance that the million is there?
Now think to yourself: suppose I choose to take both. Before I open them, what will be my estimate of the chance the million is there?
And again think to yourself: suppose I choose to take only one. Before I open them, what will be my estimate of the chance the million is there?
You seem to me to be suggesting that all three estimated chances should be the same. And I am not telling you what to think about this. If your estimates are the same, fine. And in that case, I entirely agree that it is better to take both boxes.
I say it is better to take one box if and only if your estimated chances are different for those cases, and your expected utility based on the estimates will be greater using the estimate that comes after choosing to take one box.
Do you disagree with that? That is, if we assume for the sake of argument that your estimates are different, do you still think you should always take both? Note that if your estimates are different, you may be certain you will get the million if you take one box, and certain that you will not, if you take both.
This is why I am saying that correlation matters, not causation.
This is partly true, but what you don’t seem to realize is that the direction of the causal relationship does not matter. That is, the reason you are saying this is that e.g. if you think that a smoking lesion causes cancer, then choosing to smoke will not make your estimate of the chances you will get cancer any higher than if you choose not to smoke. And in that case, your estimates do not differ. So I agree you should smoke in such a case. But—if the lesion is likely to cause you to engage in that kind of thinking and go through with it, then choosing to smoke should make your estimate of the chance that you have the lesion higher, because it is likely that the reason you are being convinced to smoke is that you have the lesion. And in that case, if the chance increases enough, you should not smoke.
I do not understand why you think that (I suspect the point of this thread is to explain why, but in spite of that, I do not understand).
Yes, that is what I am saying.
No. Nowhere in Nozick’s original statement of Newcomb’s problem is there any indication that Omega is omniscient. All Nozick states regarding Omega’s prescience is that you have “enormous confidence” in the being’s power to predict, and that the being has a really good track record of making predictions in the past. Over the years, the problem has morphed in the heads of at least some LWers such that Omega has something resembling divine foreknowledge; I suspect that this is the reason behind at least some LWers opting to “one box”.
Yes, I agree with that – choosing to smoke provides evidence that you have the lesion.
No. The fact that you have chosen to smoke may provide evidence that you have the lesion, but it does not increase the chances that you will get cancer. Think of this example:
Suppose that 90% of people with the lesion get cancer, and 99% of the people without the lesion do not get cancer.
Suppose that you have the lesion. In this case the probability that you will get cancer is .9, independent of whether or not you smoke.
Now, suppose that you do not have the lesion. In this case the probability that you will get cancer is .01, independent of whether or not you smoke.
You clearly either have the lesion or do not have the lesion. That was determined long before you made a choice about smoking, and your choice to or not to smoke does not change whether or not you have the lesion.
So, since the probability of a person with the lesion to get cancer is unaffected by his/her choice to smoke (it is .9), and the probability of a person without the lesion to get cancer is likewise unaffected by his/her choice to smoke (it is .01), then if you want to smoke you ought to go ahead and smoke; it isn’t going to affect the likelihood of your getting cancer (albeit your health insurance rates will likely go up, since it provides evidence that you have the lesion and will likely get cancer).
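The claim can be laid out as a small table in code (a sketch that only restates the numbers above): conditional on lesion status, the numbers do not move with the choice.

    # The probabilities asserted above: cancer depends on the lesion, not on the choice.
    p_cancer = {
        ("lesion", "smoke"): 0.90, ("lesion", "no smoke"): 0.90,
        ("no lesion", "smoke"): 0.01, ("no lesion", "no smoke"): 0.01,
    }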
You agreed that you are saying that the three estimated chances are the same. That is not consistent with admitting that your choices are evidence (at least for you) one way or another—if they are evidence, then your estimate should change depending on which choice you make.
Look at Newcomb in the way you wanted to look at the lesion. Either the million is in the box or it is not.
Let’s suppose that you look at the past cases and it was there some percentage of the time. We can assume 40% for concreteness. Suppose you therefore estimate that there is a 40% chance that the million is there.
Suppose you decide to take both. What is your estimate, before you check, that the million is there?
Again, suppose you decide to take one. What is your estimate, before you check, that the million is there?
You seem to me to be saying that the estimate should remain fixed at 40%. I agree that if it does, you should take both. But this is not consistent with saying that your choice (in the smoking case) provides evidence you have the lesion; this would be equivalent to your choice to take one box being evidence that the million is there.
We don’t have to make Omega omniscient for there to be some correlation. Suppose that 85% of the people who chose one box found the million, but because many people took both, the total percentage was 40%. Are you arguing in favor of ignoring the correlation, or not? After you decide to take the one box, and before you open it, do you think the chance the million is there is 40%, or 85% or something similar?
I am saying that a reasonable person would change his estimate to reflect more or less the previous correlation. And if you do, when I said “you may be certain,” I was simply taking things to an extreme. We do not need that extreme. If you think the million is more likely to be there, after the choice to take the one, than after the choice to take both, and if this thinking is reasonable, then you should take one and not both.
Mea culpa, I was inconsistent. When I was thinking of Newcomb, my rationale was that I already know myself well enough to know that I am a “two-boxing” kind of person, so actually deciding to two-box does not really provide (me) any additional evidence. I could have applied the same logic in the smoking lesion – surely the fact that I want to smoke is already strong evidence that I have the lesion, and actually choosing to smoke does not provide additional evidence.
In fact, in both cases, actually choosing to “one box” or “two box”, or to smoke or not to smoke, does provide evidence to an outside observer (hence my earlier quip about choosing to smoke causing your insurance rates to increase), and may provide new evidence to the one making the choice, depending on his/her introspective awareness (if he/she is already very in touch with his/her thoughts and preferences then actually making the choice may not provide him/her much more in the way of evidence).
However, whether or not my choice provides me evidence is a red herring. It seems to me that you are confusing the idea of increasing or decreasing your confidence that a thing will (or did) happen with the idea of increasing or decreasing the probability that it actually will (or did) happen. These two things are not the same, and in the case of the smoking lesion hypothetical, you should not smoke only if smoking increases the probability of actually getting cancer – merely increasing your assessment of the likelihood that you will get cancer is not a good reason to not smoke.
Similarly, even if choosing to open both boxes increases your expectation that Omega put nothing in the second box, the choice did not change whether or not Omega actually did put nothing in the second box.
Yes, I am arguing in favor of ignoring the correlation. Correlation is not causation. Omega’s choice has already been made – nothing that I do now will change what’s in the second box.
While I do think those are the same thing as long as your confidence is reasonable, I am not confusing anything with anything else, and I understand what you are trying to say. It just is not relevant to decision making, where what is relevant is your assessment of things.
In other words, from my point of view, “the probability a thing will happen” just is your reasonable assessment, not an objective feature of the world.
Suppose we found out that determinism was true: given the initial conditions of the universe, one particular result necessarily follows with 100% probability. If we consider “the probability a thing will happen” as an objective feature of the world, then in this situation, everything has a probability of 100% or 0%, as an objective feature. Consequently, by your method of decision making, it does not matter what you do, ever; because you never change the probability that a thing will actually happen, but only your assessment of the probability.
Obviously, though, if we found out that determinism was true, we would not suddenly stop caring about our decisions; we would keep making them in the same way as before. And what information would we be using? We would obviously be using our assessment of the probability that a result would follow, given a certain choice. We could not be using the objective probabilities since we could not change them by any decision.
So if we would use that method if we found out that determinism was true, we should use that method now.
Again, every time I brought up the idea of a perfect correlation, you simply fought the hypothetical instead of addressing it. And this is because in the situation of a perfect correlation, it is obvious that what matters is the correlation and not causation: in Scott Alexander’s case, if you know that living a sinful life has 100% correlation with going to hell, that is absolutely a good reason to avoid living a sinful life, even though it does not change the objective probability that you will go to hell (which would be either 100% or 0%).
When you choose an action, it tells you a fact about the world: “I was a person who would make choice A” or “I was a person who would make choice B.” And those are different facts, so you have different information in those cases. Consider the Newcomb case. You take two boxes, and you find out that you are a person who would take two boxes (or if you already think you would, you become more sure of this.) If you took only one box, you would instead find out that you were a person who would take one box. In the case of perfect correlation, it would be far better to find out you were a person who take one, than a person who would take two; and likewise even if the correlation is very high, it would be better to find out that you are a person who would take one.
You answer, in effect, that you cannot make yourself into a person who would take one or two, but this is already a fixed fact about the world. I agree. But you already know for certain that if you take one, you will learn that you are a person who would take one, and if you take both, you will learn that you are a person who would take both. You will not make yourself into that kind of person, but you will learn it nonetheless. And you already know which is better to learn, and therefore which you should choose.
The same is true about the lesion: it is better to learn that you do not have the lesion, than that you do, or even that you most likely do not have it, rather than learning that you probably have it.
You stated that you think that the idea of increasing or decreasing your confidence that a thing will (or did) happen and the idea of increasing or decreasing the probability that it actually will (or did) happen are “the same thing as long as your confidence is reasonable”. I disagree with the idea that the probability that a thing actually will (or did) happen is the same as your confidence that a thing will (or did) happen, as illustrated by these examples:
John’s wife died under suspicious circumstances. You are a detective investigating the death. You suspect John killed his wife. Clearly, John either did or did not kill his wife, and presumably John knows which of these is the case. However, as a detective, as you uncover each new piece of evidence, you will adjust your confidence that John killed his wife either up or down, depending on whether the evidence supports or refutes the idea that John killed his wife. However, the evidence does not change the fact of what actually happened – it just changes your confidence in your assessment that John killed his wife. This example is like the Newcomb example – Omega either did or did not put $1M in the second box – any evidence that you obtain based on your choice to one box or two box may change your assessment of the likelihood, but it does not affect the reality of the matter.
Suppose I put 900 black marbles and 100 white marbles in an opaque jar and mix them more or less uniformly. I now ask you to estimate the probability that a marble selected blindly from the jar will be white, and then to actually remove a marble, examine it, and replace it. This is repeated a number of times, and each time the marble is replaced, the contents of the jar are mixed. Suppose that due to luck, your first four picks yield two white marbles and two black marbles. You will probably assess the likelihood of the next marble being white at or around .5. However, after an increasing number of trials, your estimate will begin to converge on .1. However, the actual probability has been .1 all along – what has changed is your assessment of the probability. This is like the smoking lesion hypothetical, where your decision to smoke may increase your assessment of the probability that you will get cancer, but does not affect the actual probability that you will get cancer.
In both the examples listed above, there is an objective reality (John either did or did not kill his wife, and the probability of selecting a white marble is .1), and there is your confidence that John killed his wife, and your estimation of the probability of selecting a white marble. These things all exist, and they are not the same.
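A minimal simulation of the marble example, purely to illustrate the convergence of the assessment toward the fixed .1 (the code and its parameters are only illustrative):

    import random

    random.seed(0)
    jar = ["white"] * 100 + ["black"] * 900   # the actual proportion of white is fixed at 0.1
    whites = 0
    for n in range(1, 10001):
        if random.choice(jar) == "white":     # draw, examine, replace; the jar stays the same
            whites += 1
        if n in (4, 100, 10000):
            print(n, whites / n)              # the observer's estimate drifts toward 0.1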
You brought up the idea of omniscience when you said:
and I addressed it by pointing out that omniscience is not a part of Newcomb. Perfect correlation likewise is not a part of the smoking lesion. Perfect correlation of past trials is an aspect of the Newcomb problem, but perfect correlation of past trials is not really qualitatively different from a merely high correlation, as it does not imply that “you may be certain you will get the million if you take one box, and certain that you will not, if you take both” in the same way that flipping a coin 6 times in a row and getting heads each time does not imply that you will forever more get heads each time you flip that coin. I did consider perfect correlation of past trials in the Newcomb problem, because it is built in to Nozick’s statement of the problem. And, perfect correlation of past trials in the smoking lesion, while not part of the smoking lesion as originally stated, does not change my decision to smoke.
I was not fighting the hypothetical when I stated that omniscience is not part of Newcomb – I merely pointed out that you changed the hypothetical; a Newcomb with an omniscient Omega is a different problem than the one proposed by Nozick. I am sticking with Nozick’s and Egan’s hypotheticals.
It is true that I did not address Yvain’s predestination example. I did not find it to be relevant because Calvinist predestination involves actual predeterminism and omniscience, neither of which is anywhere suggested by Nozick. In short, Yvain has invented a new, different hypothetical; if we can’t agree on Newcomb, I don’t see how adding another hypothetical into the mix helps.
I have stated my position with the most succinct example that I can think of, and you have not addressed that example. The example was:
A similar example can be made for two-boxing:
You are either a person whom Omega thinks will two-box or you are not. Based on Omega’s assessment it either will or will not place $1M in box two.
Only after Omega has done this will it make its offer to you.
Your choice to one-box or two-box may change your assessment as to whether Omega has placed $1M in box two, but it does not change whether Omega actually has placed $1M in box two.
If Omega placed $1M in box two, your expected utility (measured in $) is: $1M if you one-box, $1.001M if you two-box
If Omega did not place $1M in box two, your expected utility is: $0 if you one-box, $1K if you two-box
Your choice to one-box vs two box does not change whether Omega did or did not put $1M in box two; Omega had already done that before you ever made your choice.
Therefore, since your expected utility is higher when you two-box regardless of what Omega did, you should two-box.
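The dominance reasoning in that example can be tabulated in a short sketch:

    # Payoff table (in $) behind the argument above.
    payoff = {
        ("million placed", "one-box"): 1_000_000,
        ("million placed", "two-box"): 1_001_000,
        ("nothing placed", "one-box"): 0,
        ("nothing placed", "two-box"): 1_000,
    }
    # In each state of the world, two-boxing pays exactly $1,000 more than one-boxing.
    for state in ("million placed", "nothing placed"):
        assert payoff[(state, "two-box")] == payoff[(state, "one-box")] + 1_000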
I don’t know that I can explain my position any more clearly than that; I suspect that if we are still in disagreement, we should simply agree to disagree (regardless of what Aumann might say about that :) ). After all, Nozick stated in his paper that it is quite difficult to obtain consensus on this problem:
Also, I do agree with your original point – Newcomb and the smoking lesion are equivalent in that similar reasoning that would lead one to one-box would likewise lead one to not smoke, and similar reasoning that would lead one to two-box would lead one to smoke.
I did not disagree that you can talk about “actual probabilities” in the way that you did. I said they are irrelevant to decision making, and I explained that using the example of determinism. This is also why I did not comment on your detailed scenario; because it uses the “actual probabilities” in the way which is not relevant to decision making.
Let me look at that in detail. In your scenario, 90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer.
Let’s suppose that 50% of the people have the lesion and 50% do not, just to make the situation specific.
The probability of having the lesion given a random person (and it doesn’t matter whether you call this an actual probability or a subjective assessment—it is the number of people) will be 50%, and the probability of not having the lesion will be 50%.
Your argument that you should smoke if you want does not consider the correlation between having the lesion and smoking, of course because you consider this correlation irrelevant. But it is not irrelevant, and we can see that by seeing what happens when we consider it.
Suppose 95% of people with the lesion choose to smoke, and 5% of the people with the lesion choose not to smoke. Similarly, suppose 95% of the people without the lesion choose not to smoke, and 5% of the people without the lesion choose to smoke.
Given these stipulations it follows that 50% of the people smoke, and 50% do not.
For a random person, the total probability of getting cancer will be 45.5%. This is an “actual” probability: 45.5% of the total people will actually get cancer. This is just as actual as the probability of 90% that a person with the lesion gets it. If you pick a random person with the lesion, 90% of such random choices will get cancer; and if you pick a random person from the whole group, 45.5% of such random choices will get cancer.
Before you choose, therefore, your estimated probability of getting cancer will be 45.5%. You seem to admit that you could have this estimated probability, but want to say that the “real” probability is either 90% or 1%, depending on whether you have the lesion. But in fact all the probabilities are equally real, depending on your selection process.
What you ignored is the probability that you will get cancer given that you smoke. From the above stipulations, it follows of necessity that 85.55% of people choosing to smoke will get cancer, and 5.45% of people choosing not to smoke will get it. You say that this changes your estimate but not the “real probability.” But this probability is quite real: it is just as true that 85.55% of smokers will get cancer, as that 90% of people with the lesion will.
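In population-count terms (a sketch with the same stipulations), out of 1,000 people:

    # 500 people have the lesion and 500 do not.
    lesion_smokers     = 500 * 0.95    # 475, of whom 90% get cancer
    lesion_nonsmokers  = 500 * 0.05    #  25, of whom 90% get cancer
    healthy_smokers    = 500 * 0.05    #  25, of whom  1% get cancer
    healthy_nonsmokers = 500 * 0.95    # 475, of whom  1% get cancer

    cancer_rate_smokers = (lesion_smokers * 0.90 + healthy_smokers * 0.01) / (lesion_smokers + healthy_smokers)                 # 0.8555
    cancer_rate_nonsmokers = (lesion_nonsmokers * 0.90 + healthy_nonsmokers * 0.01) / (lesion_nonsmokers + healthy_nonsmokers)  # 0.0545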
This is the situation I originally described, except not as extreme. If you smoke, you will be fairly sure (and with a calibrated judgement of probability) that you will get cancer, and if you do not, you will be fairly sure that you will not get cancer.
Let’s look at this in terms of calculating an expected utility. You suggest such a calculation in the Newcomb case, where you get more expected utility by taking two boxes, whether or not Omega put the million there. In the same way, in the smoking case, you think you will get more utility by smoking, whether or not you have the lesion. But notice that you are calculating two different values, one in case you have the lesion or the million, and one where you don’t. In real life you have to act without knowing whether the lesion or the million is there. So you have to calculate an overall expected utility.
What would that be? It is easy to see that it is impossible to calculate an unbiased estimate of your expected utility which says overall that you will get more by taking two boxes or by smoking. This is necessarily so, because on average the people who smoke get less utility, and the people who take two boxes also get less utility, if there is a significant correlation between Omega’s guess and people’s actions.
Let’s try it anyway. Let’s say the overall odds of the million being there are 50/50, just like we had 50/50 odds of the lesion being there. According to you, your expected utility from taking two boxes will be $501,000, calculating your expectation from the “real” probabilities. And your expected utility from taking one box will be $500,000.
But it is easy to see that it is mathematically impossible for those to be unbiased estimates if there is some correlation between the person’s choice and Omega’s guess. E.g. if 90% of the people who are guessed to be one-boxers also take just one box, and 90% of the people who are guessed to be two-boxers also take two boxes, then the average utility from taking one box will be $900,000, and the average utility from taking both will be $101,000. These are “actual” utilities, that is, they are the average that those people really get. This proves definitively that estimates which say that you will get more by taking two are necessarily biased estimates.
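A sketch of that calculation, assuming, as stipulated just above, that the million is placed for half the population and that the predictions are right 90% of the time in each direction:

    p_guessed_one = 0.5                                                    # the million is placed for these people
    p_take_one = p_guessed_one * 0.9 + (1 - p_guessed_one) * 0.1           # 0.5
    p_million_given_one = (p_guessed_one * 0.9) / p_take_one               # 0.9
    p_million_given_two = (p_guessed_one * 0.1) / (1 - p_take_one)         # 0.1

    avg_one_box = p_million_given_one * 1_000_000                                        # 900,000
    avg_two_box = p_million_given_two * 1_001_000 + (1 - p_million_given_two) * 1_000    # 101,000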
But, you will say, your argument shows that you absolutely must get more by taking both. So what went wrong? What went wrong, is that your argument implicitly assumed that there is no correlation between your choice and what is in the box, or in the smoking case, whether you have the lesion. But this is false by the statement of the problem.
It is simply a mathematical necessity from the statement of the problem that your expected utility will be higher by one boxing and by not smoking (given a high enough discrepancy in utilities and high enough correlations). This is why I said that correlation matters, not causation.
This is not so. You are confused between two kinds of uncertainty (and so, probability): the uncertainty of the actual outcome in the real, physical world, and the uncertainty of some agent not knowing the outcome.
Let’s unroll this. The actual probability for a random person to get cancer is either 90% or 1%. You just don’t know which one of these two numbers applies, so you produce an estimate by combining them. Your estimate doesn’t change anything in the real world and someone else—e.g. someone who has access to the lesion-scanning results for this random person—would have a different estimate.
Note, by the way, the difference between speaking about a “random person” and about the whole population. For the population as a whole, the 45.5% value is correct: out of 1000 people, about 455 will get cancer. But for a single person it is not correct: a single person has either a 90% actual probability or a 1% actual probability.
For simplicity consider an urn containing an equal number of white and black balls. You would say that a “random ball” has a 50% chance of being black—but each ball is either black or white, it’s not 50% of anything. 50% of the entire set of balls is black, true, but each ball’s state is not uncertain and is not subject to (“actual”) probability.
“You just don’t know which” is what probability is. The “actual” probability is the probability conditional on all the information we actually have, namely 45.5%; 90% or 1% would be the probability if, contrary to the fact, we also knew whether the person has the lesion.
See my answer to entirelyuseless.
The problem does not say that any physical randomness is involved. The 90% of those with the lesion may be determined by entirely physical and determinate causes. And in that case, the 90% is just as “actual” or “not actual”, whichever you prefer, as the 45.5% of the population who get cancer, or as the 85.55% of smokers who get cancer.
Second, physical randomness is irrelevant anyway, because the only way it would make a difference to your choice would be by making you subjectively uncertain of things. As I said in an earlier comment, if we knew for certain that determinism was true, we would make our choices in the same way we do now. So the only uncertainty that is relevant to decision making is subjective uncertainty.
I don’t want to get into the swamp of discussing the philosophy of uncertainty/probability (I think it’s much more complicated than “it’s all in your head”), so let’s try another tack.
Let me divide the probabilities in any particular situation into two classes: immutable and mutable.
Immutable probabilities are the ones you can’t change. Note that “immutable” here implies “in this particular context”, so it’s a local and maybe temporary immutability. Mutable ones are those you can change.
Both of these you may or may not know precisely and if not, you can generate estimates.
In your lesion example, the probability for a person to get cancer is immutable. You may get a better estimate of what it is, but you can’t change it—it is determined by the presence or the absence of the lesion and you can’t do anything about that lesion.
Imagine two parallel universes where you looked at a “random person”, say, Alice, from your scenario. You ask her if she smokes. In universe A she says “Yes”, so your estimate of the probability of her getting cancer is now 85.55%. In universe B she says “None of your business”, so your estimate is still 45.5%.
Your estimates are quite different, and yet in both universes Alice’s chances of getting cancer are the same—because you improving your estimates did nothing to the physical world. The probability of her getting cancer is immutable, even when your estimate changes.
Compare this to the situation where you introduce a surgeon into your example. The surgeon can remove the lesion and after that operation the people with removed lesion are just like the people who never had it in the first place: they are unlikely to both get cancer and to smoke. For the surgeon the probability of getting cancer is mutable: he can actually affect it.
Let’s say the surgeon operates on Alice: his initial probability is 45.5%, after he opens her up and discovers the lesion it becomes 90% (but the actual probability of Alice getting cancer hasn’t changed yet!), and once he removes it, the probability becomes 1%. That’s an intervention—changing not just the estimate, but the actual underlying probability. Alice’s actual probability used to be 90% and now is 1%. For the surgeon it’s mutable.
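A sketch of the distinction being drawn, using the thread’s numbers (the surgeon is the hypothetical addition from the example above):

    p_lesion = 0.5

    def p_cancer(p_lesion_now):
        # Probability of cancer as a function of how likely the lesion is to be present.
        return p_lesion_now * 0.90 + (1 - p_lesion_now) * 0.01

    # Estimate update, no intervention: learning that Alice smokes raises the estimated
    # probability that she has the lesion to 0.95, so the observer's estimate of cancer
    # becomes 0.8555, but nothing in Alice has changed.
    p_lesion_given_smoke = (p_lesion * 0.95) / (p_lesion * 0.95 + (1 - p_lesion) * 0.05)
    print(p_cancer(p_lesion_given_smoke))

    # Intervention: the surgeon removes the lesion, so the lesion really is gone and the
    # underlying chance of cancer drops to 0.01.
    print(p_cancer(0.0))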
This amounts to saying, “the probability that matters is the probability that I will get cancer, given that I have the lesion” or “the probability that matters is the probability that I will get cancer, given that I do not have the lesion.”
That’s what I’m denying. What matters is the probability that you will get cancer, period.
That probability happens to depend on whether you have the lesion or not.
You are confusing things and probabilities. Getting cancer largely depends on having the lesion or not. But the probability of getting cancer depends, not on the thing, but on the probability of having the lesion. And the probability of having the lesion is mutable.
Let me quote your own post where you set up the problem:
This is the probability of getting cancer which depends on the “thing”, that is, the lesion. It does NOT depend on the probability of having a lesion.
“90% of the people” etc. is a statement about frequencies, not probabilities.
Let’s look at the context.
You said
Those, you are saying, are frequencies and not probabilities. OK, let’s continue:
So why is having a lesion (as a function of being a human in this particular population) a probability, while having cancer (as a function of having the lesion) is a frequency?
50% of the people have the lesion. That is a frequency. But if you pick a random person, that person either has the lesion or not; frequency is not meaningful for such an individual. The probability that the random person has the lesion is 50%, because that is how strongly we expect the person to have it.
The parallel still holds. If you pick a random person with the lesion, he will either develop cancer or not. The probability that the random person with the lesion develops cancer is 90%. Is that not so?
“Pick a random person with the lesion” has more than one meaning.
If you pick a random person out of the whole population, then the probability that he will develop cancer is 45.5%. This is true even if he in fact has the lesion, so long as you do not know that he has it, since the probability is your estimate.
If you pick a random person out of the population of people who have the lesion (and therefore you already know who has the lesion), then the probability that he will develop cancer is 90%.
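To make the two procedures concrete, here is a small simulation sketch using the problem’s 50% / 90% / 1% figures (the code itself is only an illustration):

```python
import random

random.seed(0)
N = 100_000

population = []
for _ in range(N):
    lesion = random.random() < 0.50
    cancer = random.random() < (0.90 if lesion else 0.01)
    population.append((lesion, cancer))

# "Pick a random person out of the whole population":
print(sum(c for _, c in population) / N)           # ~0.455

# "Pick a random person out of the people who have the lesion":
with_lesion = [c for l, c in population if l]
print(sum(with_lesion) / len(with_lesion))         # ~0.90
```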
Basically you are simply pointing out that if you know whether you have the lesion, you will be better off smoking. That is true. In the same way, if you know whether Omega put the million in the box or not, you will be better off taking both boxes. Of course, since you are maintaining a consistent position, unlike the others here, that isn’t going to bother you.
But if you do not know if you have the lesion, and if you do not know if the million is in the box, an unbiased estimate of your expected utility must say that you will get more utility by not smoking, and by taking one box.
Yes, I two-box (LW tends to treat it as a major moral failing X-D)
And that’s precisely what I disagree with.
The difference is between doing an intervention, that is, changing something in the outside world, and adjusting your estimate which changes nothing in the outside world. “Not smoking” will lead you to adjust your estimate, but it’s not an intervention.
If that’s precisely what you disagree with, can you provide an example where you give numerical estimates of your expected utility for the two choices? Since the condition is that you do not know which is the case, you cannot say “utility X if the lesion or no-million, utility Y if not.” You have to say “estimated utility for one choice: X. Estimated utility for other choice: Y.”
Given the terms of the problem, it is mathematically impossible to provide estimates where two-boxing or smoking will be higher, without those estimates being provably biased.
Regarding the supposed intervention, choosing not to smoke is an intervention, and that is what changes your estimate, and therefore your expected utility.
I don’t think that utility functions are a useful approach to human decision-making. However in this context if you specify that smoking is pleasurable (and so provides +X utility), I would expect my utility in the I-choose-to-smoke case to be X higher than in the I-choose-not-to-smoke case.
Note, though, that I would have different utilities for the I-want-to-smoke and I-do-not-want-to-smoke cases.
No, it is not since smoking here is not a cause which affects your chances of cancer.
Utility functions are idealizations. So if someone suggests that I use a specific utility function, I will say, “No, thank you, I intend to remain real, not become an idealization.” But real objects are also not circular or square in a mathematical sense, and that does not prevent circles and squares from being useful in dealing with the real world. In the same way it can be useful to use utility functions, especially when you are talking about situations which are idealized anyway, like the Smoking Lesion and Newcomb.
Your specific proposal will not work, if it is meant to give specific numbers (and maybe you didn’t intend it to anyway). For example, we know there is about an 85% chance you will get cancer if you smoke, and about a 5% chance that you will get cancer if you don’t, given the terms of the problem. So if not getting cancer has significantly more value than smoking, then it is impossible for your answer to work out numerically, without contradicting those proportions.
And that is what you are trying to do: basically you are assuming that your choice is not even correlated with getting cancer, not only that it is not the cause. But the terms of the problem stipulate that your choice is correlated.
“which affects your chances of cancer”
It most certainly does affect the chance that matters, which is your subjective estimate. I pointed out before that people would act in the same way even if they knew that determinism was true. If it was, the chance of everything, in your sense, would either be 100% or 0%, and nothing you ever did would be an intervention, in your sense. But you would do the same things anyway, which shows that what you care about and act on is your subjective estimate.
The answer you got is the answer. It is basically an assertion that one real number is bigger than another real number. What do you mean by “work out numerically”?
Incorrect. My choice is correlated, it’s just not causal.
So, here is where we disagree. I do not think my subjective estimate is “the chance that matters”. For example, what happens if my subjective estimate is mistaken?
If determinism is true, this sentence makes no sense: there is no choice and no option for people to act in any other way.
I will illustrate how your proposal will not work out mathematically. Let’s suppose your default utility is 150, the utility of smoking is 10, and the utility of cancer is negative 100, so that total utility will be as follows:
no smoking and no cancer: 150.
smoking and no cancer: 160.
no smoking and cancer: 50.
smoking and cancer: 60.
You say that you expect to get 10 more utility by smoking than by not smoking. It is easy to see from the above schema why someone would think that, but it is also mistaken. As I said, if you are using a utility function, you do not say, “X utility in this case, Y utility in that case,” but you just calculate an average utility that you expect overall if you make a certain choice. Of course you are free to reject the whole idea of using a utility function at all, as you already suggested, but if you accept the utility function framework for the sake of argument, your proposal will not work, as I am about to explain.
This is how we would calculate your expected utility:
Expected utility of smoking = 150 + 10 - (100 * probability of cancer).
Expected utility of not smoking = 150 - (100 * probability of cancer).
You would like to say that the probability of cancer is either 90% or 1%, depending on the lesion. But that gives you two different values each for smoking and for not smoking, and this does not fit into the expected utility framework. So we have to collapse this to a single probability in each formula (even if the probability in the smoking case might not be the same as in the non-smoking case). What is that probability?
We might say that the probability is 45.5% in both cases, since we know that over the whole population, this number of people will get cancer. In that case, we would get:
Expected utility of smoking = 114.5.
Expected utility of not smoking = 104.5.
This is what you said would happen. However, it is easy to prove that these cannot be unbiased estimates of your utility. We did not stipulate anything about you which is different from the general population, so if these are unbiased estimates, they should come out equal to the average utility of the people who smoke and of the people who do not smoke. But the actual averages are:
Average utility of smokers: 150 + 10 - (100 * .8555) = 74.45.
Average utility of non-smokers: 150 - (100 * .0545) = 144.55.
So why are your values different from these? The reason is that the above calculation takes the probability of 45.5% and leaves it as is, regardless of smoking, which effectively makes your choice an independent variable. In other words, as I said, you are implicitly assuming that your choice is not correlated with the lesion or with cancer, but is an entirely independent variable. This is contrary to the terms of the problem.
Since your choice is correlated with the lesion and therefore also with cancer, the correct way to calculate your expected utility for the two cases is to take the probability of cancer given that particular choice, which leads to the expected utility of 144.55 if you do not smoke, and 74.45 if you do.
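To make the arithmetic explicit, here is a sketch of both calculations side by side (the utilities and probabilities are the ones stipulated above; the code adds nothing new):

```python
BASE, SMOKE_BONUS, CANCER_COST = 150, 10, 100

p_cancer_overall    = 0.455    # whole population
p_cancer_if_smoke   = 0.8555   # conditional on deciding to smoke
p_cancer_if_abstain = 0.0545   # conditional on deciding not to smoke

# The calculation that treats the choice as an independent variable:
eu_smoke_naive   = BASE + SMOKE_BONUS - CANCER_COST * p_cancer_overall
eu_abstain_naive = BASE - CANCER_COST * p_cancer_overall
print(round(eu_smoke_naive, 2), round(eu_abstain_naive, 2))   # 114.5 104.5

# The averages actually realized by smokers and by non-smokers:
eu_smoke   = BASE + SMOKE_BONUS - CANCER_COST * p_cancer_if_smoke
eu_abstain = BASE - CANCER_COST * p_cancer_if_abstain
print(round(eu_smoke, 2), round(eu_abstain, 2))               # 74.45 144.55
```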
You will likely get bad results. You can’t fix that by acting on something different from your subjective estimate, because if you think something else is truer than your subjective estimate, then make that your subjective estimate instead. Your subjective estimate matters not because it is automatically right, but because you don’t and can’t have anything which is more right.
Consider this situation. Someone is going to work every day to earn money to support himself. Then, one day someone convinces him that determinism is true.
Now maybe determinism is true, and maybe it isn’t. The point is that he is now convinced that it is. What do you expect to happen:
A) The person says, “Either I have 100% chance of starving to death, or a 0% chance. So why should I bother to go to work? It will not affect my chances. Even if I starve to death precisely because of not going to work, it will just mean there was a 100% chance of me not going to work in the first place. I still don’t have any intervention that can change my chances of starving.”
B) The person says, “I might starve if I quit work, but I will probably survive if I keep going to work. So I will keep going to work.”
Determinism as such is not inconsistent with either of these. It is true that if determinism is actually true, then whatever he does, he had a 100% chance of doing that. But there is nothing in the abstract picture to tell you which he is going to do. And in any case, I don’t need to assume that determinism is true. The question is what the person will do, who thinks it is true.
Most people, quite rightly, would expect the second thing to happen, and not the first. That shows that we think that other people are going to act on their subjective estimates, not on the possibility of an “intervention” that changes an objective chance. And if we would do the second thing ourselves, that implies that we are acting on subjective estimates and not on objective chances.
This is incorrect, as I pointed out a comment or two upthread.
The problem is that you still refuse to recognize the distinction between an intervention which changes the outside world and an estimate update which changes nothing in the outside world.
And will you also assert that you can change your expected utility by not smoking?
Unroll this, please. What does “bad results” mean? Am I more likely to get cancer if my estimate is wrong?
Huh? I don’t understand either why your example shows this, or why you think these two things are mutually exclusive opposites.
I am explaining why it is correct. Basically you are saying that you cannot change the chance that you will get cancer. But your choice and cancer are correlated variables, so changing your choice changes the expected value of the cancer variable.
You seem to be thinking that it works like this: there are two rows of coins, arranged so that each coin in one row shows the same side as the corresponding coin in the other row: when one is heads, the other is heads, and when one is tails, the other is tails. Now if you go in and flip over one of the coins, the other will not flip. So the coins are correlated, but flipping one over will not change what the other coin is.
The problem with the coin case is that there is a pre-existing correlation and when you flip a coin, of course it will not flip the other. This means that flipping a coin takes away the correlation. But the correlation between your choice and cancer is a correlation with -your choice-, not with something that comes before your choice. So making a choice determines the expected value of the cancer variable, even if it cannot physically change whether you get cancer. If it did not, your choice would be taking away the correlation, just like you take away the correlation in the coin case. That is why I said you are implicitly assuming that your choice is not correlated with cancer: you are admitting that other people’s choices are correlated, and so are like the rows of coins sitting there, but you think your choice is something that comes later and will take away the correlation in your own case.
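In code, the difference between conditioning on a coin and flipping it looks like this (a toy sketch, not part of the original problem):

```python
import random

random.seed(1)
pairs = [(random.choice("HT"),) * 2 for _ in range(100_000)]   # paired coins always match

# Conditioning: among pairs whose first coin is heads, the second is always heads.
second_when_first_heads = [b for a, b in pairs if a == "H"]
print(sum(b == "H" for b in second_when_first_heads) / len(second_when_first_heads))   # 1.0

# Intervening: flip the first coin of every pair; the second coin does not flip with it,
# so the pairs no longer match.
flipped = [("T" if a == "H" else "H", b) for a, b in pairs]
second_when_first_heads = [b for a, b in flipped if a == "H"]
print(sum(b == "H" for b in second_when_first_heads) / len(second_when_first_heads))   # 0.0
```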
I did not refuse to recognize such a distinction, although it is true that your estimate is part of the world, so updating your estimate is also changing the world. But the main point is that the estimate is what matters, not whether or not your action changes the world.
Yes. Before you decide whether to smoke or not, your expected utility is 109.5, because this is the average utility over the whole population. If you decide to smoke, your expected utility will become 74.45, and if you decide not to smoke, it will become 144.55. The reason this can happen is because “expected utility” is an expectation, which means that it is something subjective, which can be changed by the change in your estimate of other probabilities.
But note that it is a real expectation, not a fake one: if your expected utility is 144, you expect to get more utility than if your expected utility is 74. It would be an obvious contradiction to say that your expected utility is higher, but you don’t actually expect to get more.
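As a quick check: since the stipulated numbers imply that half the population ends up smoking, the 109.5 is just the average of the two conditional expectations (a one-line verification, nothing more):

```python
p_smoke = 0.5                         # half the population ends up deciding to smoke
eu_smokers, eu_nonsmokers = 74.45, 144.55
print(round(p_smoke * eu_smokers + (1 - p_smoke) * eu_nonsmokers, 2))   # 109.5
```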
That depends on the direction in which your estimate is wrong. You personally would be more likely to get cancer in that situation, since you would mistakenly assume that smoking will not make it more likely that you get cancer, and therefore you would not avoid smoking.
The person who decides to stop going to work does so because he cannot change the objective chance that he is going to starve to death. The person who decides to keep going to work has a subjective estimate that he is more likely to survive if he keeps going to work.
This is exactly parallel to the situation we are discussing. Consider a Deterministic Smoking Lesion: 100% of people with the lesion get cancer, no one else gets cancer, and 100% of the people with the lesion choose to smoke, and no one else chooses to smoke. By your way of arguing, it is still true that you cannot change whether you have the lesion or not, so you might as well smoke. That is exactly the same as the person who says that he might as well stop going to work. On the other hand, the person who decides to keep going to work is exactly the same as someone who says, “I cannot physically determine whether I have the lesion or not. However, if I choose not to smoke, I will be able to estimate that I do not have the lesion and will not get cancer. After choosing not to smoke, my subjective estimate of the probability of getting cancer will drop to 0%. So I will not smoke.”
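Here is the Deterministic Smoking Lesion as a toy population, where lesion, decision, and cancer all coincide (purely illustrative):

```python
# Deterministic Smoking Lesion: lesion <=> decides to smoke <=> gets cancer.
population = [{"lesion": l, "smokes": l, "cancer": l} for l in (True, False) for _ in range(50)]

smokers    = [p for p in population if p["smokes"]]
abstainers = [p for p in population if not p["smokes"]]

print(sum(p["cancer"] for p in smokers) / len(smokers))        # 1.0
print(sum(p["cancer"] for p in abstainers) / len(abstainers))  # 0.0 -> the estimate drops to 0% after choosing not to smoke
```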
Since we don’t seem to be getting anywhere on this level, let’s try digging deeper (please ignore the balrog superstitions).
Here we are talking about a “choice”. That word/concept is very important in this setup. Let’s dissect it.
I will assert that a great deal of confusion around the Smoking Lesion problem (and others related to it) arises out of the dual meaning attached to the concept of “choice”. There are actually two distinct things happening here.
Thing one is acquiring information. When you decide to smoke, this provides you with new, relevant information and so you update your probabilities and expected utilities accordingly. Note that for this you don’t have to do anything; you just learn, it’s passive acquisition of knowledge. Thing one is what you are focused on.
Thing two is acting, doing something in the physical world. When you decide to smoke, you grab a cigarette (or a pipe, or a cigar, or a blunt, or...) and take a drag. This is an action with potential consequences in reality. In the Smoking Lesion world your action does nothing (except give you a bit of utility) -- it’s not causal and does not change your cancer probabilities.
It is not hard to disassemble a single “choice” into its two components. Let’s stop at the moment of time when you already decided what to do but haven’t done anything yet. At this moment you have already acquired the information—you know what you want / what you have decided—but no action happened. If you don’t want to freeze time imagine the Smoking Lesion problem set on an island where there is absolutely nothing to smoke.
Here the “acquire information” component happened, but the “action” component did not. And does it make the problem easier? Sure, it makes it trivial: you just update on the new information, but there was no action and so we don’t have to concern ourselves with its effect (or lack of it), with causality, with free will, etc.
So I would suggest that the issues with the Smoking Lesion are the result of conflating two different things in the single concept of “choice”. Disentangle them and the confusion should -- hopefully? -- dissipate or at least lessen.
We can break it down, but I suggest a different scheme. There are three parts, not two. So:
At 1:00 PM, I have the desire to smoke.
At 2:00 PM, I decide to smoke.
At 3:00 PM, I actually smoke.
Number 3 is the action. The choice is number 2, and I will discuss that in a moment. But first, note that #1 and #2 are not the same. This is clear for two reasons. First, smoking is worth 10 utility for everyone. So everyone has the same desire, but some people decide to smoke, and some people decide not to. Even in real life, not everyone who has the desire decides to act on it. Some people want it, but decide not to.
Second, when I said that the lesion is correlated with the choice, I meant it is correlated with number 2, not number 1. If it was correlated with number 1, you could say, “I have the desire to smoke. So I likely have the lesion. But I can go ahead and smoke; it won’t make cancer any more likely.” And that argument, in that situation, would be correct. That would be exactly the same as if you knew in advance whether or not you had the lesion. If you already know that, smoking will give you more utility. In the same way, in Newcomb, if you know whether or not the million is in the box before you choose, you should take both boxes.
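To see that variant numerically, here is a sketch. All of the specific numbers (the 95%/5% split of the desire by lesion status) and the assumption that the decision tells you nothing further about the lesion once you know you have the desire are mine, chosen only to illustrate the point:

```python
# Hypothetical variant: the lesion is correlated with the DESIRE (#1),
# and the decision (#2) is independent of the lesion once the desire is fixed.
p_lesion = 0.5
p_desire_given_lesion, p_desire_given_clean = 0.95, 0.05
p_cancer_given_lesion, p_cancer_given_clean = 0.90, 0.01

# I notice that I have the desire, so I update on that:
p_lesion_given_desire = (p_desire_given_lesion * p_lesion) / (
    p_desire_given_lesion * p_lesion + p_desire_given_clean * (1 - p_lesion))
p_cancer_given_desire = (p_lesion_given_desire * p_cancer_given_lesion
                         + (1 - p_lesion_given_desire) * p_cancer_given_clean)
print(round(p_cancer_given_desire, 4))   # 0.8555, whether I go on to smoke or not

# Since deciding to smoke adds no further information about the lesion here,
# the +10 from smoking is a pure gain:
print(round(150 + 10 - 100 * p_cancer_given_desire, 2))   # 74.45 (smoke)
print(round(150 - 100 * p_cancer_given_desire, 2))        # 64.45 (don't smoke)
```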
The argument does not work when the correlation is with number 2, however, and we will see why in a moment.
Number 2 does not include the action (which is number 3), but it includes something besides information. It includes the plan of doing number 3, which plan is the direct cause of number 3. It also includes information, as you say, but you cannot have that information without also planning to do 3. Here is why. When you have the desire, you also have the information: “I have the desire.” And in the same way, when you start planning to smoke, you acquire the information, “I am now planning to smoke.” But you do NOT have that information before you start planning to smoke, since it is not even true until then.
When you are deciding whether to smoke or not, you do not yet have the information about whether you are planning to smoke or not, because you have no such plan yet. And you cannot get that information, without forming the plan at the same time.
The lesion is correlated with the plan. So when 2 happens, you form a plan. And you acquire some information, either “I am now planning to smoke,” or “I am now planning not to smoke.”
And that gives you additional information: either “very probably, I had the lesion an hour ago,” or “very probably, I did not have the lesion an hour ago.”
You suppose that this cannot happen, since either you have the lesion or not, from the beginning. But notice that “at 2:00 PM I start planning to smoke” and “at 2 PM I start planning not to smoke,” cannot co-exist in the same world. And since they only exist in different worlds, there should be nothing surprising about the fact that the past of those worlds is probably different.
I don’t see the point of your number 1. If, as you say, everyone has the desire then it contains no information and is quite irrelevant. I also don’t understand what drives the decision to smoke (or not) if everyone wants the same thing.
I am (and, I assume, most people are) perfectly capable of forming multiple plans and comparing them. Is there really the need for this hair-splitting here?
I could have left it out, but I included it in order to distinguish it from number 2, and because I suspected that you were thinking that the lesion was correlated with the desire. In that situation, you are right that smoking is preferable.
Consider what drives this kind of decision in reality. Some people desire alcohol and drink; some people desire it but do not drink. Normally this is because the ones who drink think it will be good overall, while the ones who don’t think it will be bad overall.
In this case, we have something similar: people who think “smoking cannot change whether I have the lesion or not, so I might as well smoke” will probably plan to smoke, while people who think “smoking will increase my subjective estimate that I have the lesion,” will probably plan not to smoke.
Looking at this in more detail, consider again the Deterministic Smoking Lesion, where 100% of the people with the lesion choose to smoke, and no one else does. What is driving the decision in this case is obviously the lesion. But you can still ask, “What is going on in their minds when they make the decision?” And in that case it is likely that the lesion makes people think that smoking makes no difference, while not having the lesion lets them notice that smoking is a very bad idea.
In the case we were considering, there was a 95% correlation, not a 100% correlation. But a high correlation is on a continuum with the perfect correlation; just as the lesion is completely driving the decision in the 100% correlation case, it is mostly driving the decision in the 95% case. So basically the lesion tends to make people think like Lumifer, while not having the lesion tends to make people think like entirelyuseless.
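A quick simulation of the 95% case, using the same population numbers as before, shows how strongly the decision tracks the lesion (illustrative only):

```python
import random

random.seed(0)
N = 100_000

smokers_with_lesion = smokers = nonsmokers_with_lesion = nonsmokers = 0
for _ in range(N):
    lesion = random.random() < 0.50
    smokes = random.random() < (0.95 if lesion else 0.05)   # the 95% correlation
    if smokes:
        smokers += 1
        smokers_with_lesion += lesion
    else:
        nonsmokers += 1
        nonsmokers_with_lesion += lesion

print(smokers_with_lesion / smokers)        # ~0.95: deciding to smoke almost always goes with having the lesion
print(nonsmokers_with_lesion / nonsmokers)  # ~0.05: deciding not to almost always goes with not having it
```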
If you do that, obviously you are not planning to carry out all of those plans, since they are different. You are considering them, not yet planning to do them. Number 2 is once you are sure which one you plan to do.
You are basically saying that there is no way to know what you are going to do before you actually do it. I don’t find this to be a reasonable position.
Situations when this happens exist—typically they are associated with internal conflict and emotional stress—but they are definitely edge cases. In normal life your deliberate actions are planned (if only a few seconds beforehand) and you can reliably say what you are going to do just before you actually do it.
Humans possess reflection, the ability to introspect, and knowing what you are going to do almost always precedes actually doing it. I am not sure why you want to keep on conflating knowing and doing.
I am not saying that. Number 2 is different from number 3 -- you can decide whether to smoke, before actually smoking.
What you cannot do is know what you are going to decide before you decide it. This is evident from the meaning of deciding to do something, but we can look at a couple of examples:
Suppose a chess computer has three options at a particular point. It does not yet know which one it is going to do, and it has not yet decided. Your argument is that it should be able to first find out what it is going to decide, and then decide it. This is a contradiction; suppose it finds out that it is going to do the first. Then it is silly to say it has not yet decided; it has already decided to do the first.
Suppose your friend says, “I have two options for vacation, China and Mexico. I haven’t decided where to go yet, but I already know that I am going to go to China and not to Mexico.” That is silly; if he already knows that he is going to go to China, he has already decided.
In any case, if you could know before deciding (which is absurd), we could just modify the original situation so that the lesion is correlated with knowing that you are going to smoke. Then since I already know I would not smoke, I know I would not have the lesion, while since you presumably know you would smoke, you know you would have the lesion.
So the distinction between acquiring information and action stands?
That’s fine, I never claimed anything like that.
Yes, but not in the sense that you wanted it to. That is, you do not acquire information about the thing the lesion is correlated with, before deciding whether to smoke or not. Because the lesion is correlated with the decision to smoke, and you acquire the information about your decision when you make it.
As I have said before, if you have information in advance about whether you have the lesion, or whether the million is in the box, then it is better to smoke or take both boxes. But if you do not, it is better not to smoke and to take only one box.
I don’t agree with that -- what, until the moment I make the decision I have no clue, zero information, about what I will decide? -- but that may not be relevant at the moment.
If I decide to smoke but take no action, is there any problem?
I agree that you can have some probable information about what you will decide before you are finished deciding, but as you noted, that is not relevant anyway.
It isn’t clear what you mean by “is there any problem?” If you mean, is there a problem with this description of the situation, then yes, there is some cause missing. In other words, once you decide to smoke, you will smoke unless something comes up to prevent it: e.g. the cigarettes are missing, or you change your mind, or at least forget about it, or whatever.
If you meant, “am I likely to get cancer,” the answer is yes. Because the lesion is correlated with deciding to smoke, and it causes cancer. So even if something comes up to prevent smoking, you still likely have the lesion, and therefore likely get cancer.
Newcomb is similar: if you decide to take only one box, but then absentmindedly grab them both, the million will be likely to be there. While if you decide to take both, but the second one slips out of your hands, the million will be likely not to be there.
Much of the confusion around the Smoking Lesion centers on whether your choice makes any difference to the outcome. If we disassemble the choice into two components of “learning” and “doing”, it becomes clear (to me, at least) that the “learning” part will cause you to update your estimates and the “doing” part will, er, do nothing. In this framework there is no ambiguity about causality, free will, etc.
You seem to be ignoring the deciding again. But in any case, I agree that causality and free will are irrelevant. I have been saying that all along.