This is intuitive to me as well, but I suspect it is also wrong. What is the difference between sending information from the future of a simulated universe to the present of this universe, and sending information back within the 'same' universe, if the simulation is identical to the 'real' one?
Aside from the fact that the state of the art in science suggests that one (prediction) is possible and the other (time travel) is impossible?
But I think the more important issue is that assigning time-travel powers to Omega makes the problem much less interesting. It is essentially fighting the hypothetical, because the thought experiment is intended to shed some light on the concept of "pre-commitment," and pre-commitment is not particularly interesting if Omega can time-travel. In short, changing the topic of conversation without admitting you are doing so is perceived as rude.
Newcomb’s problem doesn’t lose much of its edge if you allow Omega not to be a perfect predictor (say, it is right 95% of the time). This is surely possible without a detailed simulation that might be confused with backwards causation.
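To make that concrete: under the standard payoffs (a visible $1,000, and $1,000,000 in the opaque box only if Omega predicted one-boxing), a 95%-accurate Omega still leaves one-boxing far ahead in expected value. Here is a quick back-of-the-envelope sketch; the `expected_payoff` helper is just my own illustration, not anything from the original problem statement.

```python
# Expected payoffs in Newcomb's problem with an imperfect predictor.
# Standard payoffs assumed: $1,000 in the transparent box,
# $1,000,000 in the opaque box iff Omega predicted one-boxing.

def expected_payoff(accuracy, small=1_000, big=1_000_000):
    one_box = accuracy * big                 # big box is filled only when Omega guessed right
    two_box = small + (1 - accuracy) * big   # big box is filled only when Omega guessed wrong
    return one_box, two_box

one_box, two_box = expected_payoff(0.95)
print(f"one-box: ${one_box:,.0f}, two-box: ${two_box:,.0f}")
# one-box: $950,000, two-box: $51,000
```

So even a merely-reliable predictor preserves the tension between expected-value reasoning and dominance reasoning, with no need for time travel or backwards causation.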