I don’t know about that. Nozick’s article from 1969 states:
Suppose a being in whose power to predict your choices you have enormous confidence. … You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being’s prediction about your choice in the situation to be discussed will be correct.
Nothing in that implies that causality flowed in reverse; it sounds like the being just has a really good track record.
The “simulation” in this case could entirely be in the Predictor’s head. But I concede the point, and shift to a weaker position:
In the original Newcomb problem, the nature of the boxes is decided by a perfect or near-perfect prediction of your decision; it’s predicting your decision, and is for all intents and purposes taking your decision into account. (Yes, it -could- be using genetics, but there’s no reason to elevate that hypothesis.)
In the medical Newcomb problem, the nature of the boxes is decided by your genetics, which have a very strong correlation with your decision; it is still predicting your decision, but by a known algorithm which doesn’t take your decision into account.
Your decision in the first case should account for the possibility that it accurately predicts your decision: the sure $1,000 from two-boxing is only a thousandth of the $1,000,000 at stake, so unless you place less than a .1% chance on the prediction actually reflecting your decision, you should one-box. [Edited: Fixed math error that reversed calculation versus mis-calculation.]
Your decision in the second case should not—your genetics are already what your genetics are, and if your genetics predict two-boxing, you should two-box because $1,000 is better than nothing, and if your genetics predict one-boxing, you should two-box because $1,001,000 is better than $1,000,000.
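To make the arithmetic in those two paragraphs concrete, here is a minimal sketch under my own framing (not Nozick's): suppose that with probability q the prediction effectively reflects whatever you actually choose, and that otherwise it is a fixed guess with, say, a 50/50 prior on the opaque box being full.

    # Expected value of one-boxing vs two-boxing, under the assumptions above:
    # with probability q the prediction reflects your actual choice; otherwise
    # it is a fixed guess, with a 50/50 prior on the opaque box being full.
    BIG, SMALL = 1_000_000, 1_000

    def expected_values(q: float) -> tuple[float, float]:
        fixed_box = 0.5 * BIG                      # prior when the guess is fixed
        one_box = q * BIG + (1 - q) * fixed_box
        two_box = q * SMALL + (1 - q) * (fixed_box + SMALL)
        return one_box, two_box

    for q in (0.0, 0.0005, 0.001, 0.01, 0.5):
        one, two = expected_values(q)
        print(f"q={q:.4f}  one-box: {one:>11,.0f}  two-box: {two:>11,.0f}")

The gap works out to q × $1,000,000 − $1,000, so one-boxing wins exactly when q exceeds .1%, whatever prior you plug into the fixed-guess branch; the medical case is the q = 0 row, where two-boxing is better by exactly $1,000.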
In the original Newcomb problem, the nature of the boxes is decided by a perfect or near-perfect prediction of your decision; it’s predicting your decision, and is for all intents and purposes taking your decision into account.
Actually, I am not sure about even that weaker position. The Nozick article stated:
If a state is part of the explanation of deciding to do an action (if the decision is made) and this state is already fixed and determined, then the decision, which has not yet been made, cannot be part of the explanation of the state’s obtaining. So we need not consider the case where prob(state/action) is in the basic explanatory theory, for an already fixed state.
It seems to me that with this passage Nozick explicitly contradicts the assertion that the being is “taking your decision into account”.
In the weaker version it is taking its -prediction- of your decision into account, and it is good enough at prediction that the prediction is analogous to your decision (for all intents and purposes, it is taking your decision into account). The explanatory chain no longer runs from a fixed state to your decision; rather, the being’s prediction of your decision comes first, and the state is derived from that prediction. Introduce a .0001% chance of error and the difference is easier to see: the state is determined by the probability the being assigns to your decision, given the information available to it.
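A toy sketch of that dependency structure (the function names and the "evidence" here are mine, not anything from the article): the box contents are fixed by the being's estimate, computed from whatever information it has before you choose, and your eventual decision never feeds into that computation.

    import random

    def estimate_from_evidence(evidence: dict) -> float:
        # Stand-in for however the being models you (brain scans, past choices, ...).
        # The number is made up; the point is only that it uses pre-decision data.
        return 0.9 if evidence.get("endorses_one_boxing") else 0.2

    def fill_opaque_box(evidence: dict, error_rate: float = 1e-6) -> bool:
        """True means the $1,000,000 goes in. Note what this function never sees:
        the decision you eventually make."""
        p_one_box = estimate_from_evidence(evidence)     # P(you one-box), per the being
        predicted_one_box = p_one_box > 0.5
        if random.random() < error_rate:                 # the tiny chance of mis-calculation
            predicted_one_box = not predicted_one_box
        return predicted_one_box

    # The state is settled here, from the being's information alone...
    box_is_full = fill_opaque_box({"endorses_one_boxing": True})
    # ...and only afterwards do you decide; the state was derived from the
    # prediction of that decision, not from the decision itself.
    my_decision = "one-box"
    print(box_is_full, my_decision)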
(Although, reading the article, it appears that reverse causality, e.g. the being actually being God, is an accepted, though not canonical, potential explanation of the being’s predictive powers.)
Imagine a Prisoner’s Dilemma between two exact clones of you, with one difference: one clone is created one minute after the first, and is informed that the first clone has already made its decision. Both clones are informed of the exact nature of the test (that is, the only difference in the test is that one clone decides first). Does this additional information change your decision?
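For what it's worth, here is how I would write down the structure of that test, with ordinary (made-up) Prisoner's Dilemma payoffs and a parameter r for how strongly you expect the other clone's choice to match yours; the question is then whether hearing that the first clone has already decided should change your r at all.

    # Made-up payoffs; only the ordering matters:
    # temptation > reward > punishment > sucker.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def expected_payoff(my_move: str, r: float) -> float:
        """My expected payoff if the other clone copies my move with probability r
        and otherwise plays the opposite move."""
        other = "D" if my_move == "C" else "C"
        return r * PAYOFF[(my_move, my_move)] + (1 - r) * PAYOFF[(my_move, other)]

    for r in (1.0, 0.9, 0.5, 0.0):
        print(f"r={r:.1f}  cooperate: {expected_payoff('C', r):.2f}  defect: {expected_payoff('D', r):.2f}")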