I’m afraid you misunderstand the difference between the Smoking Lesion and Newcomb’s problem. In the Smoking Lesion, if you are the kind of person who has the lesion that causes both lung cancer and the desire to smoke, and you resist that desire, you still die of cancer. Your example is just Newcomb’s problem with an infallible forecaster, where if you don’t smoke you don’t die of cancer. This is an inherent difference. They are not the same.
My example may or may not have a forecaster. The story doesn’t say, and that’s the point. Even if it turns out that the box is not forecasting anything, but simply making people do things, the winning move is the same.
The Smoking Lesion is used as a counterexample to evidential decision theory. But understood in the way you just described it, it would not be a counterexample. You have the desire to smoke. So you know you have that desire, and you already know that you likely have the lesion. So if you resist the desire, it does not become less probable that you have the lesion.
In order to be a counterexample, your estimate of the probability that you have the lesion has to change depending on whether you decide to smoke or not smoke. This is different from the situation that you just described.
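To make that concrete, here is a toy version with made-up numbers (a 20% lesion rate and so on; none of these figures come from the original problem). Once the desire is known, conditioning on the decision changes nothing, because in the classic lesion the decision depends on the lesion only through the desire:

```python
from itertools import product

P_LESION = 0.2                                   # invented base rate
P_DESIRE_GIVEN_LESION = {True: 0.9, False: 0.1}  # the lesion causes the desire
P_SMOKE_GIVEN_DESIRE = {True: 0.8, False: 0.05}  # the decision depends only on the desire

def joint():
    """Yield (lesion, desire, smoke, probability) for every combination."""
    for lesion, desire, smoke in product([True, False], repeat=3):
        p = P_LESION if lesion else 1 - P_LESION
        p *= P_DESIRE_GIVEN_LESION[lesion] if desire else 1 - P_DESIRE_GIVEN_LESION[lesion]
        p *= P_SMOKE_GIVEN_DESIRE[desire] if smoke else 1 - P_SMOKE_GIVEN_DESIRE[desire]
        yield lesion, desire, smoke, p

def p_lesion(desire=None, smoke=None):
    """P(lesion | whatever evidence is specified)."""
    def keep(d, s):
        return (desire is None or d == desire) and (smoke is None or s == smoke)
    total = sum(p for l, d, s, p in joint() if keep(d, s))
    hit = sum(p for l, d, s, p in joint() if l and keep(d, s))
    return hit / total

print(round(p_lesion(smoke=True), 3))               # 0.592: smoking alone is evidence
print(round(p_lesion(desire=True), 3))              # 0.692
print(round(p_lesion(desire=True, smoke=True), 3))  # 0.692: the decision adds nothing
print(round(p_lesion(desire=True, smoke=False), 3)) # 0.692: resisting doesn't lower it either
```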
I know that was the intention, but it doesn’t actually work the way you think it does.
The thing that causes the confusion is that you introduced an infallible decision maker into the brain that takes all autonomy away from the human (in the case where there is no forecaster). This is basically a logical impossibility, which is why I just said “this is Newcomb’s problem”. There has to be a forecaster. But okay, suppose not. I’ll show you why this does make a difference.
In Newcomb’s problem, you do in fact influence the contents of the opaque box. Your decision doesn’t, but the fact that you are the kind of person who makes this decision does. Your algorithm does. In the Alien Implant scenario with no forecaster, you don’t affect the state of your box at all.
If there were a forecaster, you could prevent people from dying of cancer by telling them about Timeless Decision Theory. Their choice not to smoke wouldn’t affect the state of their box, but the fact that you convince them would: the forecaster predicts that you will convince them not to smoke, therefore it predicts that they won’t smoke, therefore the box is in state 2.
If there were no forecaster, whether or not someone smokes has no effect on their box, causally or otherwise. The state of their box is already determined; if you convinced them not to smoke, they would still get cancer and die, and the box would be in state 1. Now, this never happens in your scenario, which, like I said, is pretty close to impossible, hence the confusion.
But it doesn’t matter! Not smoking means you live, smoking means you die!
No, it doesn’t. Suppose the decision maker was infallible. Everyone who smokes dies. Sooner or later people would all stop smoking. And this is where the scenario doesn’t work anymore. Because the number of people dying can’t go down. So either it must be impossible to convince people – in that case, why try? – or the decision maker becomes fallible, in which case your whole argument breaks apart. You don’t smoke and still die.
Think about this fact again: no forecaster means there is a fixed percentage of the population who has their box on state 1. If you are still not convinced, consider that “winning” by not smoking would then have to mean that someone else gets cancer instead, since you cannot change the number of people who die. Obviously, this is not what happens.
If there were a forecaster and everyone stopped smoking, no one would die. If everyone one-boxes in Newcomb’s problem, everyone gets rich.
I’m not sure what you mean by “autonomy” here. The scientists guess that the device is reading or writing, but a third possibility is that it is doing both, and is a kind of brain-computer interface. In essence you might as well say it is part of the person: the human-plus-black-box combination has just as much autonomy as normal humans have.
“Suppose the decision maker was infallible. Everyone who smokes dies. Sooner or later people would all stop smoking. And this is where the scenario doesn’t work anymore. Because the number of people dying can’t go down. So either it must be impossible to convince people – in that case, why try? – or the decision maker becomes fallible, in which case your whole argument breaks apart. You don’t smoke and still die.”
In real life it does seem impossible to convince people; there are plenty of stubborn two-boxers, and plenty of people stubbornly insisting on smoking in the Smoking Lesion, like yourself. So nothing in my experience rules out it being impossible to convince everyone because of the box. Nonetheless, if the box is writing people’s choices, that does not mean it will be permanently impossible to persuade people. It will be impossible to persuade people who already have the opposite written; but if we succeed in the future in persuading everyone, it will mean that everyone in the future had their dial set to the second position. Nothing says that the proportion of people in the population with the dial set one way or the other can’t change; the settings may be beamed in by the aliens, and perhaps you are cooperating with them by trying to persuade people.
“Think about this fact again: no forecaster means there is a fixed percentage of the population who has their box on state 1.”
So what? The proportion of the population who will, in real life, die of cancer is likewise fixed; everyone who is going to die of cancer is going to die of it, and everyone who isn’t, isn’t. That doesn’t mean the proportion can’t change in the future, either in the real-life case or in the box case.
“it will mean that everyone in the future had their dial set to the second position.”
No, it won’t. Nothing you wrote into the story indicates that you can change the box (in the case of no forecaster). If you could, that would change everything (and it wouldn’t be the Smoking Lesion anymore).
I don’t think you understood. Consider Newcomb’s problem by itself. Omega has already flown away. The million is either there, or it is not.
The only sense in which you can change whether the million is there is this: if you decide to take two boxes, you are basically deciding to have been a person who would take two boxes, and therefore deciding that Omega would not have put the million there. If you decide to take one box, you are basically deciding to have been a person who would take one box, and therefore deciding that Omega would have put the million there.
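For the Newcomb half of that, the expected values make the point with the conventional payoffs. A quick check, assuming the standard $1,000,000 / $1,000 amounts and a predictor that is right about the kind of person you are with some given accuracy (these numbers are the usual textbook ones, not anything from the story):

```python
def expected_value(one_box, accuracy):
    """Expected payoff if the prediction matches your choice with the given accuracy."""
    million, thousand = 1_000_000, 1_000
    if one_box:
        return accuracy * million                # the million is there iff one-boxing was predicted
    return (1 - accuracy) * million + thousand   # the million is there only if the predictor slipped

for accuracy in (0.6, 0.9, 0.99):
    print(accuracy,
          round(expected_value(True, accuracy)),   # one-box
          round(expected_value(False, accuracy)))  # two-box
# One-boxing comes out ahead for any accuracy above about 50.05%:
# 0.6 -> 600000 vs 401000, 0.9 -> 900000 vs 101000, 0.99 -> 990000 vs 11000
```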
In my situation, it is the same: you can “determine” whether your dial is set to the first or second position by making a decision about whether to smoke.
Now consider the Omega situation above, except that after Omega has left, Super-Omega steps in, who cannot be predicted by Omega. Super-Omega changes your decision to the opposite of what it was going to be. If this happens, you can two-box and still get the million, or one-box and get nothing, depending on what your original decision was.
In my situation, it is the same: if someone can actually persuade a person to do the opposite of his dial setting, that persuader is basically like Super-Omega here. In other words, this would be exactly what you were talking about, the situation where convincing someone does not help.
What I was saying was this: in the Alien Implant world, the currently existing people have their dials set to the first or second position in a certain proportion. Let’s say that 90% of people have their dials set to the second position (so that most people don’t die of cancer), and 10% have their dials set to the first position. I agree that the story says their dials never change position. But new people are constantly being born, and nothing in the story says that the proportion among the new people cannot be different.
Assuming the non-existence of Super-Omegas, it is true that the proportion of people who choose to smoke will never be different from the proportion of people who have dials set to the first position. That does not mean that you cannot convince an individual not to smoke—it just means that the person you convince already has his dial set to the second position. And it also does not mean that the proportion cannot change, via the existence of new people.
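Here is a sketch of that, with purely illustrative numbers (the 90% and 97% cohorts, the 50% persuasion effort, and the cohort size are all invented). In the no-forecaster reading, persuasion only ever “succeeds” on people whose dial was already in the second position, and the number of deaths within a cohort is fixed by the dial proportion alone, but nothing stops the proportion from differing between cohorts:

```python
import random

def birth_cohort(size, p_state_two):
    """Each person is represented only by their dial: True = second position (no cancer)."""
    return [random.random() < p_state_two for _ in range(size)]

def run_cohort(cohort, persuasion_effort):
    persuaded, deaths = 0, 0
    for dial_on_state_two in cohort:
        # The box writes the choice: a person ends up not smoking
        # exactly when the dial is in the second position.
        does_not_smoke = dial_on_state_two
        if does_not_smoke and random.random() < persuasion_effort:
            persuaded += 1   # every persuasion "success" already had the dial in position two
        if not dial_on_state_two:
            deaths += 1      # position one means cancer, whatever anyone says
    return persuaded, deaths

random.seed(0)
print(run_cohort(birth_cohort(100_000, 0.90), persuasion_effort=0.5))
print(run_cohort(birth_cohort(100_000, 0.97), persuasion_effort=0.5))
# Deaths track the dial proportion (about 10% and 3% of each cohort here);
# persuasion changes who counts as a "success", never who dies.
```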
Also, I forgot to remark on your claim that a non-forecasting box is “logically impossible”. Is this supposed to be logically impossible with a 51% correlation?
or a 52% correlation?
…
or a 98% correlation?
or a 99% correlation?
or a 99.999999% correlation?
I suppose you will say that it becomes logically impossible at an 87.636783% correlation, but I would like to see your argument for this.
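For what it is worth, it is easy to write down a non-forecasting box with any agreement rate you like (the rates, the 90% dial split, and the population size below are arbitrary): posit a common cause that sets the box’s state and, with some fixed probability, a matching choice. Trivially, the observed agreement comes out at whatever rate you chose, which is the point: there is no threshold at which the setup stops making sense.

```python
import random

def observed_agreement(match_rate, population=100_000, p_state_two=0.9):
    """Fraction of people whose choice matches their box, given a common cause
    that produces a match with probability match_rate."""
    agree = 0
    for _ in range(population):
        state_two = random.random() < p_state_two  # box state, fixed at birth
        matches = random.random() < match_rate     # does the written choice track the box?
        does_not_smoke = state_two if matches else not state_two
        agree += (does_not_smoke == state_two)
    return agree / population

random.seed(1)
for rate in (0.51, 0.52, 0.87636783, 0.99, 0.99999999):
    print(rate, round(observed_agreement(rate), 3))
# The same construction covers every level, including 100%, which is just
# the "box writes the choice" reading of the story.
```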
“In my situation, it is the same: you can ‘determine’ whether your dial is set to the first or second position by making a decision about whether to smoke.”
No.
You cannot. You can’t.
I’m struggling with this reply. I almost decided to stop trying to convince you. I will try one more time, but I need you to consider the possibility that you are wrong before you continue to the next paragraph. Consider the outside view: if you were right, Yudkowsky would be wrong, Anna would be wrong, everyone who read your post here and didn’t upvote this revolutionary, shocking insight would be wrong. Are you sufficiently more intelligent than any of them to be confident in your conclusion? I’m saying this only so you consider the possibility, nothing more.
You do not have an impact. The reason why you believe otherwise is probably that in Newcomb’s problem, you do have an impact in an unintuitive way, and you generalized this without fully understanding why you have an impact in Newcomb’s problem. It is not because you can magically choose to live in a certain world despite no causal connection.
In Newcomb’s problem, the kind of person you are causally determines the contents of the opaque box, and it causally determines which boxes you take. You have the option to change the kind of person you are, i.e. decide you’ll one-box in Newcomb’s problem at any given moment before you are confronted with it (such as right now), therefore you causally determine how much money you will receive once you play it in the future. The intuitive argument “it is already decided, therefore it doesn’t matter what I do” is actually 100% correct. Your choice to one-box or two-box has no influence on the contents of the opaque box. But the fact that you are the kind of person who one-boxes does, and it happens to be that you (supposedly) can’t two-box without being the kind of person who two-boxes.
In the Smoking Lesion, in your alien scenario, this impact is not there. An independent source determines both the state of your box and your decision to smoke or not to smoke. A snapshot of all humans at any given time, with no forecasting ability, reveals exactly who will die of cancer and who won’t. If Super-Omega comes down from the sky and convinces everyone to stop smoking, the exact same people will die as before. If everyone stopped smoking immediately, the exact same people would die as before. In the future, the exact same people who would otherwise have died still die. People with the box on the wrong state who decide to stop smoking still die.
Also, about this: “Consider the outside view: if you were right, Yudkowsky would be wrong, Anna would be wrong, everyone who read your post here and didn’t upvote this revolutionary, shocking insight would be wrong. Are you sufficiently more intelligent than any of them to be confident in your conclusion?”
This outside view is too limited; there are plenty of extremely intelligent people outside Less Wrong circles who agree with me. This is why I said from the beginning that the common view here came from the desire to agree with Eliezer. Notice that no one would agree and upvote without first having to disagree with all those others, and they are unlikely to do that because they have the limited outside view you mention here: they would not trust themselves to agree with me, even if it was objectively convincing.
Scott Alexander is probably one of the most unbiased people ever to be involved with Less Wrong. Look at this comment:
But keeping the original premise that it’s known that out of everyone who’s ever lived in all of history, every single virtuous Calvinist has ended up in Heaven and every single sinful Calvinist has ended up damned—I still choose to be a virtuous Calvinist. And if the decision theorists don’t like that, they can go to hell.
Likewise, if they don’t like not smoking in the situation here, they can die of cancer.
“You have the option to change the kind of person you are, i.e. decide you’ll one-box in Newcomb’s problem at any given moment before you are confronted with it (such as right now), therefore you causally determine how much money you will receive once you play it in the future.”
If I am not the kind of person who would accept this reasoning, I can no more make myself into the kind of person who would accept this reasoning (even right now), than I can make myself into a person who has the dial set to the second position. Both are facts about the world: whether you have the dial set in a certain position, and whether you are the kind of person who could accept that reasoning.
And on the other hand, I can accept the reasoning, and I can choose not to smoke: I will equally be the kind of person who takes one box, and I will be a person who would have the dial in the second position.
My take on EDT is that it’s, at its core, vague about probability estimation. If the probabilities are accurate forecasts based on detailed causal models of the world, then it works at least as well as CDT. But if there’s even a small gap between the model and reality, it can behave badly.
E.g. if you like vanilla ice cream but the people who get chocolate really enjoy it, you might not endorse an EDT algorithm that thinks of probabilities as frequencies within a reference class. I see the Smoking Lesion as a more sophisticated version of this same issue.
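A toy version of that ice-cream point, with invented survey numbers, to show what I mean by an algorithm that reads probabilities off a reference class:

```python
# Made-up survey data: what each person chose and how much they enjoyed it.
population = [
    ("chocolate", 9.0), ("chocolate", 9.5), ("chocolate", 8.5),  # chocolate lovers
    ("vanilla", 6.0), ("vanilla", 5.5),                          # lukewarm vanilla pickers
]

def reference_class_value(flavour):
    """Average enjoyment among everyone who picked this flavour."""
    scores = [enjoyment for choice, enjoyment in population if choice == flavour]
    return sum(scores) / len(scores)

my_actual_utilities = {"vanilla": 9.0, "chocolate": 3.0}  # I simply prefer vanilla

flavours = ["vanilla", "chocolate"]
print(max(flavours, key=reference_class_value))    # chocolate: its reference class is happiest
print(max(flavours, key=my_actual_utilities.get))  # vanilla: what I would actually enjoy
```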
But then if probabilities are estimated via causal model, EDT has exactly the same problem with Newcomb’s Omega as CDT, because the problem with Omega lies in the incorrect estimation of probabilities when someone can read your source code.
So I see these as two different problems with two different methods of assigning probabilities in an underspecified EDT. This means that I predict there’s an even more interesting version of your example where both methods fail. The causal modelers assume that the past can’t predict their choice, and the reference class forecasters get sidetracked by options that put them in a good reference class without having causal impact on what they care about.
It is not the responsibility of a decision theory to tell you how to form opinions about the world; it should tell you how to use the opinions you have. EDT does not mean reference class forecasting; it means computing expected utility according to the opinions you would actually have if you did the thing, not ignoring the fact that doing the thing would give you information.
Or in other words, it means acting on your honest opinion of what will give you the best result, and not a dishonest opinion formed by pretending that your opinions wouldn’t change if you did something.
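Spelled out as a rule, for the classic lesion with the desire already known (the utilities and the probabilities below are invented; the only point is the contrast between an honest conditional opinion and a reference-class one):

```python
def edt_best(actions, p_cancer_given_act, u_smoking=1.0, u_cancer=-100.0):
    """Pick the act with the highest expected utility under the given conditional beliefs."""
    def expected_utility(act):
        bonus = u_smoking if act == "smoke" else 0.0
        return bonus + p_cancer_given_act[act] * u_cancer
    return max(actions, key=expected_utility)

# Honest opinion: the desire is already known, so the act carries no further news about the lesion.
honest = {"smoke": 0.69, "abstain": 0.69}
# Reference-class "opinion": pretends you don't already know what you know about yourself.
reference_class = {"smoke": 0.59, "abstain": 0.07}

print(edt_best(["smoke", "abstain"], honest))           # smoke
print(edt_best(["smoke", "abstain"], reference_class))  # abstain
```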
I think this deflationary conception of decision theory has serious problems. First is that because it doesn’t pin down a decision-making algorithm, it’s hard to talk about what choices it makes—you can argue for choices but you can’t demonstrate them without showing how they’re generated in full. Second is that it introduces more opportunities to fool yourself with verbal reasoning. Third is that historically I think it’s resulted in a lot of wasted words in philosophy journals, although maybe this is just objection one again.