Well, Nozick’s formulation in 1969, which popularized the problem in philosophy, went ahead and specified that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.
That smuggles a theory of unidirectional causality into the very setup itself, which explains how it winds up being called “Newcomb’s Paradox” instead of Newcomb’s Problem.
That is not a specification; it is a supposition. It is the same supposition CDT makes (rejection of backwards causality), and it leads to the same result of not playing Newcomb.
It’s like playing chess and saying “dude, my rook can go diagonal, too!”
At that point, you’re not playing chess anymore.
No.
No, it’s just not aware that it could be running inside Omega’s head.
Another way of putting it is that CDT doesn’t model entities as modeling it.
What it is aware of is highly irrelevant.
1. Newcomb has a payoff matrix.
2. CDT refuses this payoff matrix and substitutes its own.
3. Therefore CDT solves a different problem.
Which of (1,2,3) do you disagree with?
CDT doesn’t change the payoffs. If it takes the single box and there is money in it, it still receives a million dollars; if it also takes the other box, it will receive 10,000 additional dollars. These are the standard payoffs to Newcomb’s Problem.
What you are assuming is that your decision affects Omega’s prediction. While it is nice that your intuition is so strong, CDTers disagree with this claim, as your decision has no causal impact on Omega’s prediction.
This formulation of Newcomb’s Problem may clarify where the intuition goes wrong:
Suppose that the boxes are transparent, i.e. you can already see whether or not there’s a million dollars present. Suppose you see that there is a million dollars present; then you have no reason not to grab an additional $10,000. After all, per GiveWell, that probably lets you save ~8 lives. You wouldn’t want to randomly toss away 8 lives, would you? And the million is already present, you can see it with your own eyes, it’s there no matter what you do. If you take both boxes, the money won’t magically vanish; the payoff matrix is that, if the money’s there, you get it, end of story.
But suppose there isn’t a million dollars there. Are you really going to walk away empty-handed, when you know, for sure, that you won’t be getting a million dollars? After all, the money’s already not there; Omega will not be paying a million dollars, no matter what happens. So your choice is between $0 and $10,000. Again, are you really going to toss away those ~8 lives for nothing, no reason, no gain?
This is equivalent to the standard formulation: you may not be able to see the million, but it is already either present or not present.
You’re not talking about Newcomb. In Newcomb, you don’t get any “additional” $1,000; these are the only dollars you get, because the $1,000,000 does magically vanish if you take the “additional” $1,000.
The payoff matrix for Newcomb is as follows:
You take two boxes, you get $n>0.
You take one box, you get $m>n.
CDT, then, isn’t aware of the payoff matrix. It reasons as follows: Either Omega put money in boxes A and B, or only in box B. If Omega put money in both boxes, I’m better off taking both boxes. If Omega put money only in box B, I should also take both boxes instead of only box A. CDT doesn’t deal with the fact that which of these two games it’s playing depends on what it will choose to do in each case.
No, this is false. CDT is the one using the standard payoff matrix, and you are the one refusing to use the standard payoff matrix and substituting your own.
In particular: the money is either already there, or not already there. Once the game has begun, the Predictor is powerless to change things.
The standard payoff matrix for Newcomb is therefore as follows:
Omega predicts you take two boxes, you take two boxes, you get $n>0.
Omega predicts you take two boxes, you take one box, you get 0.
Omega predicts you take one box, you take one box, you get $m>n.
Omega predicts you take one box, you take two boxes, you get $m+n>m.
The problem becomes trivial if, as you are doing, you refuse to consider the second and fourth outcomes. However, you are then not playing Newcomb’s Problem.
No, only then am I playing Newcomb. What you’re playing is weak Newcomb, where you assign a probability of x>0 for Omega being wrong, at which point this becomes simple math where CDT will give you the correct result, whatever that may turn out to be.
No, you are assuming that your decision can change what’s in the box, which everybody agrees is wrong: the problem statement is that you cannot change what’s in the million-dollar box.
Also, what you describe as “weak Newcomb” is the standard formulation: Nozick’s original problem stated that the Predictor was “almost always” right. CDT still gives the wrong answer in simple Newcomb, as its decision cannot affect what’s in the box.
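For readers who want the “simple math” spelled out, here is a minimal sketch of that arithmetic. It is not taken from either comment; the function names, the payoff values m = 1,000,000 and n = 1,000, and the accuracy parameters p and q are illustrative assumptions. It contrasts the expected payoff when the prediction is assumed to match the choice with probability p against the expected payoff when the box contents are held fixed, which is how the CDT reasoning above evaluates the options.

```python
def evidential_value(p, m=1_000_000, n=1_000):
    """Expected payoffs if the prediction matches the actual choice with probability p."""
    one_box = p * m                       # big box is full whenever the prediction is right
    two_box = p * n + (1 - p) * (m + n)   # big box is empty whenever the prediction is right
    return one_box, two_box

def causal_value(q, m=1_000_000, n=1_000):
    """Expected payoffs if the big box is already full with probability q,
    independently of the choice (the fixed-contents framing above)."""
    one_box = q * m
    two_box = q * m + n                   # taking the second box adds n regardless
    return one_box, two_box

print(evidential_value(0.99))  # (990000.0, 11000.0): one-boxing comes out ahead
print(causal_value(0.5))       # (500000.0, 501000.0): two-boxing comes out ahead
```

On the first calculation, one-boxing wins whenever p > (m+n)/(2m), which is just over 0.5 for these payoffs; on the second, two-boxing comes out ahead for every value of q, which is the dominance reasoning described above.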
That’s not the “original problem”; that’s just the fleshed-out introduction to “Newcomb’s Problem and Two Principles of Choice”, where he talks about aliens and other stuff that has about as much to do with Newcomb as prisoners have to do with the Prisoner’s Dilemma. Then, after outlining some common intuitive answers, he goes on a mathematical tangent and later returns to the question of what one should do in Newcomb with this paragraph:

Now, at last, to return to Newcomb’s example of the predictor. If one believes, for this case, that there is backwards causality, that your choice causes the money to be there or not, that it causes him to have made the prediction that he made, then there is no problem. One takes only what is in the second box. Or if one believes that the way the predictor works is by looking into the future; he, in some sense, sees what you are doing, and hence is no more likely to be wrong about what you do than someone else who is standing there at the time and watching you, and would normally see you, say, open only one box, then there is no problem. You take only what is in the second box. But suppose we establish or take as given that there is no backwards causality, that what you actually decide to do does not affect what he did in the past, that what you actually decide to do is not part of the explanation of why he made the prediction he made. So let us agree that the predictor works as follows: He observes you sometime before you are faced with the choice, examines you with complicated apparatus, etc., and then uses his theory to predict on the basis of this state you were in, what choice you would make later when faced with the choice. Your deciding to do as you do is not part of the explanation of why he makes the prediction he does, though your being in a certain state earlier, is part of the explanation of why he makes the prediction he does, and why you decide as you do.

I believe that one should take what is in both boxes. I fear that the considerations I have adduced thus far will not convince those proponents of taking only what is in the second box. Furthermore I suspect that an adequate solution to this problem will go much deeper than I have yet gone or shall go in this paper. So I want to pose one question. I assume that it is clear that in the vaccine example, the person should not be convinced by the probability argument, and should choose the dominant action. I assume also that it is clear that in the case of the two brothers, the brother should not be convinced by the probability argument offered. The question I should like to put to proponents of taking only what is in the second box in Newcomb’s example (and hence not performing the dominant action) is: what is the difference between Newcomb’s example and the other two examples which make the difference between not following the dominance principle, and following it?
And yes, I think I can agree with him on this.
CDT is a solution to Newcomb’s problem. It happens to be wrong, but it isn’t solving a completely separate problem. It’s going about solving Newcomb’s problem in the wrong way.
I assume this means that you disagree with 3?
Edit: You’re just contradicting me without responding to any of my arguments. That doesn’t seem very reasonable, unless your aim is to never change your opinion no matter what.
I think people may be confused by your word choice.
What are you referring to? I’d like to avoid confusion if possible.
I think people are finding phrases “CDT is solving a separate problem” and “CDT refuses to play this game and plays a different one” jarring. See my other response. Edit: people might also find your tone adversarial in a way that’s off-putting.
Jarring, wrong and adversarial. Not a good combination.
Yes, I saw your other reply, thank you for that.
I do disagree with 3, though I disagree (mostly connotatively) with 1 and 2 as well.
The arguments you refer to were not written at the time I wrote my previous response, so I’m not sure what your point in the “Edit” is.
Nevertheless, I’ll write my response to your argument now.

“In theoretical Newcomb, CDT doesn’t care about the rule of Omega being right, so CDT does not play Newcomb.”
You are correct when you say that CDT “doesn’t care” about Omega being right. But that doesn’t mean that CDT agents don’t know that Omega is going to be right. If you ask a CDT agent to predict how they will do in the game, they will predict that they will earn far less money than someone who one-boxes. There is no observable fact that a one-boxer and a two-boxer will disagree on (at least in this sense). The only disagreement the two will have is about the counterfactual statement “if you had made a different choice, that box would/would not have contained money”.
That counterfactual statement is something that different decision theories implicitly give different views on. Its truth or falsity is not in the problem; it’s part of the answer. CDT agents don’t rule out the theoretical possibility of a predictor who can accurately predict their actions. CDT just says that the counterfactual which one-boxers use is incorrect. This is wrong, but CDT is just giving a wrong answer to the same question.

So what you’re saying is that CDT refuses the whole setup and then proceeds to solve a completely different problem, correct?
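One standard way to make the disagreement over that counterfactual explicit, not drawn from either commenter and using textbook notation purely for illustration, is to compare how the two theories score an act a (one-boxing or two-boxing) over the possible contents s of the opaque box:

```latex
% Evidential expected utility: the choice is treated as evidence about the contents
U_{\mathrm{EDT}}(a) = \sum_{s \in \{\text{full},\ \text{empty}\}} P(s \mid a)\, U(a, s)

% Causal expected utility: the contents are taken to be causally independent of the choice,
% so each state keeps its unconditional probability
U_{\mathrm{CDT}}(a) = \sum_{s \in \{\text{full},\ \text{empty}\}} P(s)\, U(a, s)
```

With an accurate predictor, P(full | one-box) is high and P(full | two-box) is low, so the first formula favors one-boxing, while the second favors two-boxing for any fixed P(full). The dispute in the comments above is over which of these weightings is the right one to use, not over any observable fact.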