The issue is assigning probability to the outcome (Omega predicted player one-boxing whereas player two-boxed), as it is the only one where two-boxing wins.
No, because two-boxing also wins if Omega predicts that you will two-box, and therefore always wins as long as your decision doesn’t alter Omega’s prediction of that very decision. CDT would two-box because n + 1000 > n for both n = 0 and n = 1,000,000.
But because Newcomb’s problem can’t exist, CDT can never actually choose anything in it.
Other than that, your post seems pretty accurate.
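The dominance reasoning above can be sketched numerically. This is a toy illustration, not anyone’s official formalization; the payouts are the standard $1,000 / $1,000,000 values from Nozick’s formulation, and the function names are mine:

```python
# Standard Newcomb payouts: the opaque box holds n dollars (either 0 or
# 1,000,000, fixed before you choose); the transparent box always holds 1,000.
def one_box(n):
    return n

def two_box(n):
    return n + 1_000

# Dominance: whatever Omega already put in the opaque box,
# two-boxing pays exactly 1,000 more.
for n in (0, 1_000_000):
    assert two_box(n) > one_box(n)
    print(n, two_box(n) - one_box(n))  # the difference is always 1000
```

This is exactly the n + 1000 > n argument: the comparison holds for every fixed box content, which is why CDT (holding the prediction fixed) takes both boxes.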
I’m still not at all sure what you mean when you say that Newcomb can’t exist. Could you say a bit more about what exactly you think cannot exist?
Newcomb’s problem assumes that Omega is omniscient, which more importantly means that the decision you make right now determines whether Omega has put money in the box or not. Obviously this is backwards causality, and therefore not possible in real life, which is why Nozick doesn’t spend much ink on it.
But if you rule out the possibility of backwards causality, Omega can only base its prediction of your decision on your actions up to the point where it has to decide whether to put money in the box or not. In that case, if you take two people who have so far always acted (decided) identically, but one will one-box while the other will two-box, Omega cannot make different predictions for them. And no matter what prediction Omega makes, you don’t want to be the one who one-boxes.
No, it doesn’t. Newcomb’s problem assumes that Omega has enough accuracy to make the expected value of one-boxing greater than the expected value of two-boxing. That is all that is required to give the problem the air of paradox.
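How accurate is “enough accuracy”? With a fallible predictor of accuracy p (my notation; payouts as in Nozick’s formulation), the two expected values are straightforward to compute, and one-boxing pulls ahead as soon as p exceeds 0.5005:

```python
def ev_one_box(p):
    # With probability p, Omega correctly predicts one-boxing and fills
    # the opaque box with 1,000,000; otherwise the one-boxer gets nothing.
    return p * 1_000_000

def ev_two_box(p):
    # With probability p, Omega correctly predicts two-boxing and leaves
    # the opaque box empty (payout 1,000); otherwise it wrongly filled
    # the box and the two-boxer collects 1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.5005, 0.51, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))

# One-boxing has the higher expected value exactly when p > 0.5005.
assert ev_one_box(0.51) > ev_two_box(0.51)
assert ev_one_box(0.5) < ev_two_box(0.5)
```

Solving p · 1,000,000 > 1,000 + (1 − p) · 1,000,000 gives p > 0.5005, so even a predictor that is barely better than a coin flip is enough to make the expected values of CDT’s and EDT’s recommendations come apart.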
Read Nozick instead of making false statements.
There are four types of Newcomb-like problems:
Omniscient Omega (backwards causality) - CDT rejects this case, which cannot exist in reality.
Fallible Omega, but still backwards causality - CDT rejects this case, which cannot exist in reality.
Infallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.
Fallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.
That’s all there is to it.
This will be my last comment on this thread. I’ve read Nozick. I’ve also read much of the current literature on Newcomb’s problem. While Omega is sometimes described as a perfect predictor, assuming that Omega is a perfect predictor is not required in order to get an apparently paradoxical result. The reason is that given no backwards causation (more on that below) and as long as Omega is good enough at predicting, CDT and EDT will recommend different decisions. But both approaches are derived from seemingly innocuous assumptions using good reasoning. And that feature—deriving a contradiction from apparently safe premisses through apparently safe reasoning—is what makes something a paradox.
Partisans will argue for the correctness or incorrectness of one or the other of the two possible decisions in Newcomb’s problem. I have not given any argument. And I’m not going to give one here. For present purposes, I don’t care whether one-boxing or two-boxing is the correct decision. All I’m saying is what everyone who works on the problem agrees about. In Newcomb problems, CDT chooses two boxes and that choice has a lower expected value than taking one box. EDT chooses one box, which is strange on its face, since the decision now is presumed to have no causal relevance to the prediction. Yet, EDT recommends the choice with the greater expected value.
The usual story assumes that there is no backwards causation. That is why Nozick asks the reader (in the very passage you quoted, which you really should read more carefully) to: “Suppose we establish or take as given that there is no backwards causality, that what you actually decide to do does not affect what [the predictor] did in the past, that what you actually decide to do is not part of the explanation of why he made the prediction he made.” If we don’t follow Nozick in making this assumption—if we assume that there is backwards causation—CDT does not “reject the case” at all. If there is backwards causation and CDT has that as an input, then CDT will agree with EDT and recommend taking one box. The reason is that in the case of backwards causation, the decision now is causally relevant to the prediction in the past. That is precisely why Nozick ignores backwards causation, and he is utterly explicit about it in the first three sentences of the passage you quoted. So, there is good reason to consider only the case where you know (or believe) that there is no backwards causation because in that case, CDT and EDT paradoxically come apart.
But neither CDT nor EDT excludes any causal structure. CDT and EDT are possible decision theories in worlds with closed time-like curves. They’re possible decision theories in worlds that have physical laws that look nothing like our own physical laws. CDT and EDT are theories of decision, not theories of physics or metaphysics.
I consider CDT with “there is backwards causality” as an input something that isn’t CDT anymore; however I doubt disputing definitions is going to get us anywhere and it doesn’t seem to be the issue anyway.
The reason a CDT agent two-boxes is that Omega makes its prediction based on the fact that the agent is a CDT agent, and therefore no money will be in the box. The reason an EDT agent one-boxes is that Omega makes its prediction based on the fact that the agent is an EDT agent, and therefore money will be in the box. Both decisions are correct.*
This becomes a paradox only if your premise is that a CDT agent and an EDT agent are in the same situation, but if the decision theory of the agent is what Omega bases its prediction on, then they aren’t in the same situation.
(*If the EDT agent could two-box, then it should do that; however an EDT agent that has been predicted by Omega to one-box cannot choose to two-box.)
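The point about Omega conditioning on the agent’s decision theory can be made concrete with a toy simulation (the agent labels, function names, and payouts here are illustrative assumptions of mine, not part of the original problem statement):

```python
def omega_fills_box(agent_type):
    # Omega predicts from the agent's decision theory alone: it fills the
    # opaque box only when it predicts one-boxing, i.e. for EDT agents.
    return agent_type == "EDT"

def play(agent_type):
    n = 1_000_000 if omega_fills_box(agent_type) else 0
    # CDT two-boxes (dominance); EDT one-boxes (higher conditional EV).
    return n + 1_000 if agent_type == "CDT" else n

print(play("CDT"))  # 1000: Omega saw a CDT agent coming and left the box empty
print(play("EDT"))  # 1000000: Omega saw an EDT agent and filled the box
```

Under this setup the two agents really do face different situations, as the comment above argues: the box contents differ before either agent chooses, so each agent’s choice is optimal given the box it actually faces.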
I don’t see a problem with the perfect predictor existing; I see a statement like “one can choose something other than what Omega predicted” as a contradiction in the problem’s framework. I suppose the trick is to have an imperfect predictor and see if it makes sense to take a limit (prediction accuracy → 100%).
It’s not a matter of accuracy, it’s a matter of considering backwards causality or not. Please read this post of mine.