In the original Newcomb, causality genuinely flowed in the reverse. Your decision -did- change whether or not there was a million dollars in the box. The original problem had information flowing backwards in time (either through a simulation which, for practical purposes, plays time forward and then goes back to the origin, or through an omniscient being seeing into the future, however one wishes to interpret it).
In the medical Newcomb, causality -doesn’t- flow in the reverse, so behaving as though causality -is- flowing in the reverse is incorrect.
I don’t know about that. Nozick’s article from 1969 states:
Suppose a being in whose power to predict your choices you have enormous confidence. … You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being’s prediction about your choice in the situation to be discussed will be correct.
Nothing in that implies that causality flowed in the reverse; it sounds like the being just has a really good track record.
The “simulation” in this case could entirely be in the Predictor’s head. But I concede the point, and shift to a weaker position:
In the original Newcomb problem, the nature of the boxes is decided by a perfect or near-perfect prediction of your decision; it’s predicting your decision, and is for all intents and purposes taking your decision into account. (Yes, it -could- be using genetics, but there’s no reason to elevate that hypothesis.)
In the medical Newcomb problem, the nature of the boxes is decided by your genetics, which have a very strong correlation with your decision; it is still predicting your decision, but by a known algorithm which doesn’t take the decision itself into account.
Your decision in the first case should account for the possibility that it accurately predicts your decision: unless you place 0.1% or greater odds on it miscalculating your decision, you should one-box. [Edited: Fixed math error that reversed calculation versus miscalculation.]
Your decision in the second case should not: your genetics already are what they are, and if your genetics predict two-boxing, you should two-box because $1,000 is better than nothing, while if your genetics predict one-boxing, you should two-box because $1,001,000 is better than $1,000,000.
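To make the arithmetic in the first case explicit, here is a sketch, assuming only that the being mis-predicts with some probability ε and that you are maximizing expected dollars:

    \begin{aligned}
    \mathbb{E}[\text{one-box}] &= (1-\epsilon)\cdot \$1{,}000{,}000,\\
    \mathbb{E}[\text{two-box}] &= (1-\epsilon)\cdot \$1{,}000 + \epsilon\cdot \$1{,}001{,}000.
    \end{aligned}

One-boxing has the higher expectation whenever ε < 999,000 / 2,000,000 ≈ 0.4995, so an error rate below 0.1% clears that bar with plenty of room. In the second case no comparison involving ε arises at all: the boxes are fixed before you choose, and two-boxing pays exactly $1,000 more in either state.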
In the original Newcomb problem, the nature of the boxes is decided by a perfect or near-perfect prediction of your decision; it’s predicting your decision, and is for all intents and purposes taking your decision into account.
Actually, I am not sure about even that weaker position. The Nozick article stated:
If a state is part of the explanation of deciding to do an action (if the decision is made) and this state is already fixed and determined, then the decision, which has not yet been made, cannot be part of the explanation of the state’s obtaining. So we need not consider the case where prob(state/action) is in the basic explanatory theory, for an already fixed state.
It seems to me that with this passage Nozick explicitly contradicts the assertion that the being is “taking your decision into account”.
It is taking its -prediction- of your decision into account in the weaker version, and is good enough at prediction that the prediction is analogous to your decision (for all intents and purposes, taking your decision into account). It is no longer the decision that explains the state, but rather the prediction of that decision, with the state derived therefrom. Introduce a 0.0001% chance of error and the difference is easier to see: the state is determined by the probability of your decision, given the information the being has available to it.
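Spelled out as a sketch (writing ε for the chance of a mis-prediction and I for whatever information the being has about you, both of which are my labels rather than anything in the problem):

    P(\text{\$1M in the box} \mid I) = (1-\epsilon)\,P(\text{you one-box} \mid I) + \epsilon\,P(\text{you two-box} \mid I).

The state depends on your decision only through the being’s estimate of it, which is the sense in which the prediction stands in for the decision.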
(Although, reading the article, it appears that reverse causality, in the case where the being is God, is an accepted, although not canonical, potential explanation of the being’s predictive powers.)
Imagine a Prisoner’s Dilemma between two exact clones of you, with one difference: one clone is created one minute after the other and is informed that the first clone has already made its decision. Both clones are informed of exactly the nature of the test (that is, the only difference in the test is that one clone makes its decision first). Does this additional information change your decision?
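To make the structure of that hypothetical concrete, here is a minimal sketch (the payoff numbers are assumed standard Prisoner’s Dilemma values, not anything specified above): if the shared decision procedure ignores the extra bit of information, the two clones cannot end up on different sides of the matrix.

    # Hypothetical illustration only; the payoff numbers are assumed.
    PAYOFFS = {  # (my_move, their_move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def shared_procedure(other_already_decided: bool) -> str:
        """The single decision procedure both exact clones instantiate.
        If it ignores its argument, the two clones necessarily output
        the same move, whichever move that turns out to be."""
        return "C"  # or "D"; the point is that the two calls must agree

    first_move = shared_procedure(other_already_decided=False)   # the earlier clone
    second_move = shared_procedure(other_already_decided=True)   # told the other already chose
    assert first_move == second_move           # identical procedure, identical output
    print(PAYOFFS[(second_move, first_move)])  # only (C, C) or (D, D) is reachable

Whether the procedure -should- condition on that bit is exactly the question being asked.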
In this case you are simply interpreting the original Newcomb to mean something absurd, because causality cannot “genuinely flow in reverse” under any circumstances whatsoever. Rather, in the original Newcomb, Omega looks at your disposition, which exists at the very beginning. If he sees that you are disposed to one-box, he puts in the million. This is just the same as someone looking at the source code of an AI to see whether it will one-box, or someone looking for the one-boxing gene.
Then, when you make the choice, in the original Newcomb you choose to one-box. Causality flows in only one direction, from your original disposition, which you cannot change since it is in the past, to your choice. This causality is entirely the same as in the genetic Newcomb. Causality never goes any direction except past to future.
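The source-code analogy above can be written out directly; this is only an illustrative sketch (the names are invented for it), but it shows every arrow running from past to future:

    # Hypothetical illustration; names are invented for the sketch.
    class Agent:
        def __init__(self, disposition: str):
            self.disposition = disposition  # "one-box" or "two-box"; fixed before Omega looks

        def choose(self) -> str:
            # The later choice flows from the earlier disposition.
            return self.disposition

    def omega_fills_boxes(agent: Agent) -> dict:
        # Omega "reads the source code": it inspects the disposition that already exists.
        predicted_one_boxer = agent.disposition == "one-box"
        return {"A": 1_000, "B": 1_000_000 if predicted_one_boxer else 0}

    agent = Agent("one-box")
    boxes = omega_fills_boxes(agent)   # earlier event, caused by the past disposition
    choice = agent.choose()            # later event, caused by the same past disposition
    payout = boxes["B"] if choice == "one-box" else boxes["A"] + boxes["B"]
    print(choice, payout)              # -> one-box 1000000

Omega’s decision and the agent’s choice are both downstream of the same earlier disposition, which is the claimed parallel with the one-boxing gene.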
Hypotheticals are not required to follow the laws of reality, and the Predictor in the original problem is definitionally prescient: he knows what is going to happen. You can invent whatever reason you would like for this, but causality flows not from your current state of being directly, but from your current state of being, to your future decision, to the Predictor’s decision right now. That is because the Predictor’s decision on what to put in the boxes is predicated not on your current state of being, but on your future decision.