If precommitment is observable and unchangeable, then the order of actions is (a toy solution in code follows the list):
Joe: precommit or not
Kate: accept or not, knowing whether Joe precommitted
Joe: break up or not (this node only exists if he didn’t precommit)
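Concretely, this first tree solves by ordinary backward induction. Here’s a minimal Python sketch; the payoff numbers, and every name in it, are assumptions I made up for illustration:

```python
# Minimal backward-induction sketch of the first tree.
# Payoffs are (joe, kate); all numbers are made up for illustration.
PAYOFFS = {
    ("precommit", "accept"): (5, 5),
    ("precommit", "reject"): (0, 0),
    ("no_precommit", "accept", "stay"): (5, 5),
    ("no_precommit", "accept", "leave"): (6, -5),  # Joe values keeping the exit open
    ("no_precommit", "reject"): (0, 0),
}

def joe_last_move():
    # Joe's final node: he takes whichever continuation pays him more.
    stay = PAYOFFS[("no_precommit", "accept", "stay")]
    leave = PAYOFFS[("no_precommit", "accept", "leave")]
    return stay if stay[0] >= leave[0] else leave

def kate_move(joe_precommitted):
    # Kate observes Joe's choice and best-responds, anticipating his last move.
    if joe_precommitted:
        accept, reject = PAYOFFS[("precommit", "accept")], PAYOFFS[("precommit", "reject")]
    else:
        accept, reject = joe_last_move(), PAYOFFS[("no_precommit", "reject")]
    return accept if accept[1] >= reject[1] else reject

def joe_first_move():
    # Joe compares the equilibrium payoff of each opening move.
    return "precommit" if kate_move(True)[0] >= kate_move(False)[0] else "no precommit"

print(joe_first_move())  # -> precommit (5 beats the 0 he gets after being rejected)
```

With these numbers Kate rejects an uncommitted Joe because she anticipates his last move, so the commitment is exactly what buys Joe the better outcome; nothing beyond best-response reasoning is involved.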
If the precommitment is not observable and/or is changeable, the tree can be rearranged, and we have (again sketched below):
Kate: accept or not, without any clue what Joe did
Joe: break up or not
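Here Kate can’t condition on Joe’s choice, so she decides against a prior belief. Another toy sketch, where the belief p_breakup and the payoffs 5 / -5 / 0 are all assumed:

```python
# Kate now moves without seeing Joe's choice, so she decides against a belief.
def kate_accepts(p_breakup):
    # Expected payoff of accepting: 5 if the marriage lasts, -5 if Joe leaves;
    # rejecting is worth 0 for sure.
    return p_breakup * (-5) + (1 - p_breakup) * 5 >= 0

print(kate_accepts(0.3))  # True: with these payoffs she accepts whenever p <= 0.5
print(kate_accepts(0.7))  # False
```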
Or, in the most complex situation, with three probabilistic nodes (sketched after the list):
Joe: precommit or not
Nature: Kate correctly figures out what Joe did, or not
Kate: accept or not
Nature: marriage happy or unhappy
Nature: Joe changes his mind or not
Joe: break up or not
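Even this tree needs nothing beyond expected-value backward induction. A sketch, where Q_READ, H_HAPPY, and all payoffs are invented for illustration, and where “changes mind” is folded into Joe exiting exactly when the marriage is unhappy and he is free to:

```python
# Expected-value backward induction over the full tree, illustrative numbers only.
Q_READ = 0.9    # Nature: Kate reads Joe's actual choice correctly
H_HAPPY = 0.8   # Nature: the marriage turns out happy

def joe_expected_value(precommit):
    # Joe's payoffs (assumed): happy marriage 5, stuck in an unhappy one -2,
    # exiting an unhappy one 1, being rejected 0.
    ev = 0.0
    for reads_right, p_read in ((True, Q_READ), (False, 1 - Q_READ)):
        believed_commit = precommit if reads_right else not precommit
        if not believed_commit:   # Kate only accepts when she believes he committed;
            continue              # rejection pays 0, so it adds nothing to ev
        for happy, p_happy in ((True, H_HAPPY), (False, 1 - H_HAPPY)):
            payoff = 5 if happy else (-2 if precommit else 1)
            ev += p_read * p_happy * payoff
    return ev

for choice in (True, False):
    print(choice, round(joe_expected_value(choice), 2))
# -> True 3.24, False 0.42: ordinary expected values, no loop anywhere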
None of these is remotely Newcombish. You only get the Newcomb paradox when you assume a causal loop and then try to solve the problem using tools devised for situations without causal loops.
It is the Newcomb Problem. It may be tricky and counter-intuitive, but it isn’t a paradox. More importantly, the Newcomb Problem does not rely on a causal loop. Some form of reliable prediction is necessary, but that does not imply a causal loop.
reliable prediction = causal loop
This isn’t the case when you predict that a calculator, given the input 2+2, will output 4. Why would it necessarily be the case when predicting a person?
Why was this voted down so hard? Please explain. It sounds reasonable to me.
If Joe believes that his precommitment is inviolable, or even that it affects the probability of him breaking up later, then it appears to him that he is confronted with a causal loop. His decision-making program, at that moment, addresses Newcomb’s problem, even if it’s wrong in believing in the causal loop.
But I think this only proves that flawed reasoners may face Newcomb’s problem. (It might even turn out that finding yourself facing Newcomb’s problem proves your reasoning is flawed.)
If you care about “causal reasoning”, the other half of what’s supposed to make Newcomb confusing, then Joe’s problem is more like Kavka’s (so this post accidentally shows how Kavka and Newcomb are similar). But the distinction is instrumentally irrelevant: the point is that he can benefit from decision mechanisms that are evidential and time-invariant, and you don’t need “unreasonable certainties” or “paradoxes of causality” for this to come up.
It’s still interesting enough to up-vote.
My pre-sponse to this is in footnote 2:
There is no need for time-invariance. The most generic model (2 Joe nodes; 1 Kate node; 3 Nature nodes) of vanilla decision theory perfectly explains the situation you’re talking about, unless you postulate some causal loops.
Is that not the simplicity you’re interested in?
And in Kavka’s problem there’s no paradox unless we assume causal loops (the billionaire knows now whether you’re going to decide to drink the toxin tomorrow), or leave the problem ambiguous (so, can you change your mind or not?).
You’ll notice I didn’t once use the word “paradox” ;)