Something has been bothering me about Newcomb’s problem, and I recently figured out what it is.
It seems to simultaneously postulate that backwards causality is impossible and that you have repeatedly observed backwards causality. If we allow your present decision to affect the past, the problem disappears, and you simply pick the million-dollar box.
In real life, we have a strong expectation that the future can’t affect the past, but in the Newcomb problem we have pretty good evidence that it can.
Short answer: Yup. Because Omega is a perfect or near-perfect predictor, your decision is logically antecedent, but not chronologically antecedent, to Omega’s decision. People like Michael Vassar, Vladimir Nesov, and Will Newsome think and talk about this sort of thing more often than the average LessWronger.
You probably know this, but just in case: in Newcomb’s problem, Omega makes its prediction before you choose. Omega is just really good at this. The chooser doesn’t repeatedly observe backwards causality, even if they might be justified in thinking they did.
It seems very intuitive to me that being very good at predicting someone’s decision (probably by something like simulating the decision-process) is conceptually different from time travel. Plus, I don’t think Newcomb’s problem is an interesting decision-theory question if Omega is simply traveling (or sending information) backward in time.
This is intuitive to me as well, but I suspect that it is also wrong. What is the difference between sending information from the future of a simulated universe to the present of this universe and sending information back in the ‘same’ universe if the simulation is identical to the ‘real’ universe?
Aside from the fact that the state of the art in science suggests that one (prediction) is possible and the other (time travel) is impossible?
But I think the more important issue is that assigning time-travel powers to Omega makes the problem much less interesting. It is essentially fighting the hypothetical, because the thought experiment is intended to shed some light on the concept of “pre-commitment.” Pre-commitment is not particularly interesting if Omega can time-travel. In short, changing the topic of conversation, but not admitting you are changing the topic, is perceived as rude.
Newcomb’s problem doesn’t lose much of its edge if you allow Omega not to be a perfect predictor (say, it is right 95% of the time). This is surely possible without a detailed simulation that might be confused with backwards causation.
In the standard formulation (a perfect predictor) one-boxers always end up winning and two-boxers always end up losing, so there is no issue with causality, except in the mind of a confused philosopher.
How is that observably different from backwards causality existing? Perhaps we need to taboo the word “cause”.