It’s questionable whether the smoking lesion problem is a valid counterexample to EDT in the first place. It can be argued that the problem is underspecified, and it requires additional assumptions for EDT to determine an outcome:
A reasonable assumption is that the rare gene affects smoking only through its action on Susan’s preferences: “Susan has the genetic lesion” and “Susan smokes” are conditionally independent events given “Susan likes to smoke”. Since the agent is assumed to know their own preferences, the decision to smoke given that Susan likes to smoke doesn’t increase the probability that she has the genetic lesion, hence EDT correctly chooses “smoke”.
But consider a different set of assumptions: an evil Omega examines Susan’s embryo even before she is born, determines whether she will smoke, and, if she will, puts in her DNA an otherwise rare genetic lesion that will likely give her cancer but causes no other detectable effect.
Please note that this is not a variation of the smoking lesion problem; it’s merely a specification that is still perfectly consistent with the original formulation: the genetic lesion is positively correlated with both smoking and cancer.
What decision does EDT choose in this case? It chooses “Don’t smoke”, and arguably correctly so, since with these assumptions the problem is essentially a rephrasing of Newcomb’s problem where “Smoke” = “Two-box” and “Don’t smoke” = “One-box”.
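As a minimal sketch of how the two specifications push EDT in opposite directions (the utilities and probabilities below are made up, purely for illustration):

```python
# Toy EDT comparison of the two specifications; all numbers are illustrative.

U_SMOKING_PLEASURE = 10   # utility Susan gets from smoking
U_CANCER = -100           # disutility of getting cancer

def edt_values(p_cancer_if_smoke, p_cancer_if_abstain):
    """Evidential expected utility of each act, given P(cancer | act)."""
    return {
        "smoke":       U_SMOKING_PLEASURE + p_cancer_if_smoke * U_CANCER,
        "don't smoke": p_cancer_if_abstain * U_CANCER,
    }

# Specification 1: the lesion acts only through preferences.  Since Susan already
# knows she likes to smoke, the act carries no further evidence about the lesion,
# so P(cancer | smoke) = P(cancer | don't smoke).
print(edt_values(0.2, 0.2))    # smoke: -10, don't smoke: -20  ->  EDT smokes

# Specification 2: Omega plants the lesion exactly when Susan will smoke, so the
# act is strong evidence about the lesion and hence about cancer.
print(edt_values(0.9, 0.01))   # smoke: -80, don't smoke: -1   ->  EDT doesn't smoke
```

The only thing that changes between the two cases is whether P(cancer | act) still depends on the act once Susan’s known preferences have been conditioned on.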
I agree.
I agree with this analysis. The most interesting case is a third variation, in which there is no evil Omega, but the organic genetic lesion causes not only a preference for smoking but also a weakness in resisting that preference, a propensity for rationalizing yourself into smoking, etc. We can assume this happens in such a way that “Susan actively chooses to smoke” is still new positive evidence to a third-party observer that Susan has the lesion, over and above the previous evidence provided by knowledge about Susan’s preferences (and conscious reasoning, etc.) before she actively makes the choice. I think in this case Susan should treat the case as a Newcomb problem and choose not to smoke, but it is less intuitive without an Omega calling the shots.
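To make “new positive evidence over and above the preference” concrete, here is a small Bayes calculation with hypothetical numbers; the only point is that, as long as the lesion raises the probability of actually choosing to smoke even among people who already like smoking, the choice itself shifts the posterior on the lesion upward:

```python
# Hypothetical numbers, already conditioned on "Susan likes to smoke".
p_lesion = 0.30                # P(lesion | likes)
p_smoke_if_lesion = 0.95       # lesion also weakens resistance, aids rationalization
p_smoke_if_no_lesion = 0.60

p_smoke = p_smoke_if_lesion * p_lesion + p_smoke_if_no_lesion * (1 - p_lesion)
p_lesion_given_smoke = p_smoke_if_lesion * p_lesion / p_smoke

print(round(p_lesion_given_smoke, 2))   # ~0.40 > 0.30: the act itself is extra evidence
```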
In that case she should still smoke. There’s no causal arrow going from “choosing to smoke” to “getting cancer”.
There is no causal arrow in Newcomb from choosing two boxes to the second one being empty.
Functionally, there is; it’s called “Omega is a perfect predictor.”
See my reply to Khoth. You can call this a functional causal arrow if you want, but you can reanalyze it as a standard causal arrow from your original state to both your decision and (through Omega) the money. The same thing happens in my version of the smoking problem.
Suppose I’m a one-boxer, Omega looks at me, and is sure that I’m a one-boxer. But then, after Omega fills the boxes, Mentok comes by, takes control of me, and forces me to two-box. Is there a million dollars in the second box?
Er… yes? Assuming Omega could not foresee Mentok coming in and changing the situation, that is; no if he could foresee this, but then the relevant original state includes both me and Mentok. I’m not sure I see the point.
Let’s take a step back: what are we discussing? I claimed that my version of the smoking problem, in which the gene is correlated with your decision to smoke (not just with your preference for it), is like the Newcomb problem, and that if you are a one-boxer in the latter you should not smoke in the former. My argument for this was that both cases are isomorphic in that there is an earlier causal node causing, through separate channels, both your decision and the payoff. What is the problem with this viewpoint?
Then Omega is not a perfect predictor, and thus there’s a contradiction in the problem statement.
The strength of the connection between the causal nodes makes a big difference in practice. If the smoking gene doesn’t make you more likely to smoke, but makes it absolutely certain that you will smoke, why represent those as separate nodes?
I am sorry, I cannot understand what you are getting at in either of your paragraphs.
In the first one, are you arguing that the original Newcomb problem is contradictory? The problem assumes that Omega can predict your behavior. Presumably this is not done magically but by knowing your initial state and running some sort of simulation. Here the initial state is defined as everything that affects your choice (otherwise Omega wouldn’t be accurate), so if there is a Mentok, his initial state is included as well. I fail to see any contradiction.
In the second one, I agree that “The strength of the connection between the causal nodes makes a big difference in practice”, but I fail to see the relevance (I would say we are assuming in these problems that the connection is very strong in both Newcomb and Smoking), and I cannot parse your reasoning in the last sentence at all. Could you elaborate?
My argument is that Newcomb’s Problem rests on these assumptions:
Omega is a perfect predictor of whether or not you will take the second box.
Omega’s prediction determines whether or not he fills the second box.
There’s a hidden assumption that many people import: “Causality cannot flow backwards in time,” or “Omega doesn’t use magic,” which makes the problem troubling. If you draw a causal arrow from your choice to the second box, then everything is clear and the decision is obvious.
If you try to import other nodes, then you run into trouble: if Omega’s prediction is based on some third thing, it either is the choice in disguise (and so you’ve complicated the problem to avoid magic by waving your hands) or it could be fooled (and so it’s not a Newcomb’s Problem so much as a “how can I trick Omega?” problem). You don’t want to be in the situation where you’re changing your node definition to deal with “what if X happens?”
For example, consider the question of what happens when you commit to a mixed strategy: flipping an unentangled qubit, and one-boxing on up and two-boxing on down. If Omega uses magic, he predicts the outcome of the qubit, and you either get a thousand dollars or a million dollars. If Omega uses some deterministic prediction method, he can’t be certain to predict correctly, so you can’t describe the original Newcomb’s problem that way, and any inferences you draw about the pseudo-Newcomb’s problem may not generalize.
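A toy simulation of that last point (my own model, not anything in the problem statement): a predictor that can only simulate deterministic facts about the agent knows the strategy but not the qubit outcome, so it cannot be the perfect predictor the problem stipulates.

```python
import random

def agent_choice():
    # The committed mixed strategy: measure an unentangled qubit,
    # one-box on "up", two-box on "down".
    return "one-box" if random.random() < 0.5 else "two-box"

def deterministic_prediction():
    # Best the predictor can do is commit to a guess; it cannot track the qubit.
    return "one-box"

trials = 100_000
hits = sum(agent_choice() == deterministic_prediction() for _ in range(trials))
print(hits / trials)   # ~0.5, not 1.0
```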
OK, I understand now. I agree that the problem needs a bit of specification. If we treat the assumption that Omega is a perfect (or quasi-perfect) predictor as fixed, I see two possibilities:
Omega predicts by taking a sufficiently inclusive initial state and running a simulation. The initial state must include everything that predictably affects your choice (e.g. Mentok, or classical coin flips), so no trickery like “adding nodes” is possible. The assumption of a Predictor requires that your choice is deterministic: either quantum mechanics is wrong, or Omega only offers the problem in the first place to people whose choice will not depend on quantum effects. So you cannot (or “will not”) use the qubit strategy.
Omega predicts by magic. I don’t know how magic works, but assuming it amounts more or less to my choice affecting the prediction directly, in an effective back-in-time causation, the one-box solution becomes trivial, as you say.
So I think the first interpretation is the one that makes the problem interesting. I was assuming it in my analogy to Smoking.
Maybe I worded it badly. What I meant was, in Newcomb’s problem, Omega studies you to determine the decision you will make, and puts stuff in the boxes based on that. In the lesion problem, there’s no mechanism by which the decision you make affects what genes you have.
Omega makes the prediction by looking at your state before setting the boxes. Let us call P the property of your state that is critical for his decision. It may be the whole microscopic state of your brain and environment, or it might be some higher-level property like “firm belief that one-boxing is the correct choice”. In any case, there must be such a P, and it is from P that the causal arrow to the money in the box goes, not from your decision. Both your decision and the money in the box are correlated with P. Likewise, in my version of the smoking problem both your decision to smoke and cancer are correlated with the genetic lesion. So I think my version of the problem is isomorphic to Newcomb.
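A rough sketch of the structure I have in mind (the node labels are my own, chosen just for illustration): both problems are the same common-cause fork, with no arrow from the decision to the payoff.

```python
def fork(common_cause, decision, payoff):
    # A common-cause structure: cause -> decision, cause -> payoff, and no
    # arrow between decision and payoff.
    return {common_cause: [decision, payoff]}

newcomb = fork("P (the state Omega reads)", "one-box / two-box",
               "money in box B (via Omega)")
lesion  = fork("genetic lesion", "decision to smoke", "cancer")

# Same shape, different labels.
print([len(v) for v in newcomb.values()] == [len(v) for v in lesion.values()])  # True
```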