Newcomb’s problem is not particularly interesting if one assumes the mechanism is time travel. If Omega really (1) wants to reduce the amount it spends and (2) can send information backward in time (i.e., time travel), no decision theory can do well. The fact that Eliezer’s proposed decision theory is called “timeless” doesn’t actually mean anything, and it hasn’t really been formalized anyway.
In short, try thinking about the problem with time travel excluded. What insights there are to gain from the problem are most accessible from that perspective.
If Omega really (1) wants to reduce the amount it spends and (2) can send information backward in time (i.e., time travel), no decision theory can do well.
This statement is clearly false. Any decision theory that gives time-travelling Omega enough incentive to believe that you will one-box will do well. I don’t think this is possible without actually one-boxing, though.
You can substitute “timeless” with “considering violation of causality, for example time travel”. “Timeless” is just shorter.
In short, try thinking about the problem with time travel excluded.
Without time travel, this problem either ceases to exist or becomes a simple calculation.
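For concreteness, here is a minimal sketch of that calculation in Python, covering the two computations the thread keeps returning to: the dominance comparison with the box contents held fixed, and the expected value as a function of Omega’s accuracy. The $1,000 and $1,000,000 amounts and the accuracy values are the usual textbook figures, assumed here rather than taken from the discussion.

    # Newcomb payoffs once time travel is excluded and the boxes are filled
    # before the choice is made. Amounts are assumed, not from the thread.
    SMALL, BIG = 1_000, 1_000_000

    def dominance():
        """Compare choices with the opaque box's content held fixed."""
        for opaque in (0, BIG):
            one_box, two_box = opaque, opaque + SMALL
            print(f"opaque={opaque:>9,}: two-boxing gains {two_box - one_box:,}")

    def expected_value(p):
        """Expected payoffs if Omega predicts the actual choice with accuracy p."""
        ev_one = p * BIG                               # correct -> opaque box is full
        ev_two = p * SMALL + (1 - p) * (BIG + SMALL)   # correct -> opaque box is empty
        return ev_one, ev_two

    dominance()
    for p in (0.5, 0.51, 0.99):
        ev_one, ev_two = expected_value(p)
        print(f"p={p}: one-box EV={ev_one:,.0f}, two-box EV={ev_two:,.0f}")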
No; Timeless Decision Theory does not violate causality. It is not a physical theory that postulates new time-travelling particles or whatever; almost all of its advocates believe in full determinism, in fact. (Counterfactual mugging is an equivalent problem.)
Newcomb’s Problem has never included time travel. Every standard issue arises in the standard, non-time-travel version. In particular, if one allows for backward causation (i.e., for one’s decision to causally affect what’s in the box), then the problem becomes trivial.
No; Timeless Decision Theory does not violate causality.
I didn’t say (or mean) that it violated causality. I meant it assigned a probability p>0 to violation of causality being possible. I may be wrong on this, since I only read enough about TDT to infer that it isn’t interesting or relevant to me.
Newcomb’s Problem has never included time travel.
Actual Newcomb includes an omniscient being, and omniscience is impossible without time travel / violation of causality.
If you say that Omega makes its prediction purely based on the past, Newcomb becomes trivial as well.
I meant it assigned a probability p>0 to violation of causality being possible.
It intrinsically says nothing about causality violation. All “zero is not a probability” and lack-of-infinite-certainty issues are independent of the decision theory. The decision theory just works with whatever your map contains.
Actual Newcomb doesn’t include an omniscient being; I quote from Wikipedia:
However, the original discussion by Nozick says only that the Predictor’s predictions are “almost certainly” correct, and also specifies that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.
Except that this is false, so nevermind.
Also, actual knowledge of everything aside from the Predictor is possible without time travel. It’s impossible in practice, but this is a thought experiment. You “just” need to specify the starting position of the system, and the laws operating on it.
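As a toy illustration of that point, here is a sketch in which Omega predicts a fully deterministic agent simply by running a copy of it before filling the boxes; no backward causation is involved. The agent function and the dollar amounts are hypothetical, made up for the example.

    # Prediction by simulation: Omega knows the "starting position" (here, the
    # agent's decision procedure) and the "laws" (here, just running it).
    SMALL, BIG = 1_000, 1_000_000          # assumed amounts

    def agent():
        """A hypothetical, fully deterministic decision procedure."""
        return "two-box"

    def omega_fill_opaque_box(agent_fn):
        prediction = agent_fn()            # simulate the agent in advance
        return BIG if prediction == "one-box" else 0

    opaque = omega_fill_opaque_box(agent)  # happens strictly before the choice
    choice = agent()                       # the actual, later choice
    payoff = opaque + (SMALL if choice == "two-box" else 0)
    print(choice, payoff)                  # -> two-box 1000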
Well, the German Wikipedia says something entirely different, so may I suggest you actually read Nozick? I have posted a paragraph from the paper in question here.
Translation from German Wiki: “An omniscient being...”
What does this tell us? Exactly, that we shouldn’t use Wikipedia as a source.
Oops, my apologies.
If you say that Omega makes its prediction purely based on the past, Newcomb becomes trivial as well.
Omega makes its prediction purely based on the past (and present).
That being the case, which decision would you say is trivially correct? Based on what you have said so far, I can’t predict which way your decision would go.
Ruling out backwards causality, I would two-box, and I would get $1000 unless Omega made a mistake.
No, I wouldn’t rather be someone who one-boxes in Newcomb. If Omega makes its predictions based on the past, one-boxing would only lead to me losing $1000, since Newcomb is a one-time problem. I would have to choose differently in other decisions for Omega to change its prediction, and that is something I’m not willing to do.
Of course, if I’m allowed to communicate with Omega, I would try to convince it that I’ll be one-boxing (while still two-boxing), and if I could increase the probability of Omega predicting that I’ll one-box enough to justify actually precommitting to one-boxing (by use of a lie detector or whatever), then I would do that.
However, in reality I would probably get some satisfaction out of proving Omega wrong, so the payoff matrix may not be that simple. I don’t think this is in any way relevant to the theoretical problem, though.
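As a sketch of the “may not be that simple” remark, one could fold the satisfaction of proving Omega wrong into the payoffs as an extra term; the value s below is an arbitrary placeholder, not anything claimed in the thread.

    # Payoff matrix with a subjective bonus s for an incorrect prediction.
    SMALL, BIG = 1_000, 1_000_000   # assumed dollar amounts
    s = 500                         # hypothetical "satisfaction" value

    def payoff(choice, prediction):
        opaque = BIG if prediction == "one-box" else 0
        money = opaque + (SMALL if choice == "two-box" else 0)
        bonus = s if choice != prediction else 0   # Omega got it wrong
        return money + bonus

    for prediction in ("one-box", "two-box"):
        for choice in ("one-box", "two-box"):
            print(f"predicted {prediction}, chose {choice}: {payoff(choice, prediction):,}")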