Yes it does. It makes a decision in the past that depends on your decision in the future, and your decision in the future can assume Omega has already decided in the past. That’s a causality loop.
Newcomb is a completely bogus problem.
Is the taw-on-Newcomb downvoting happening because he’s speaking against what’s considered settled fact?
It’s only a loop in imaginary Platonia. In the real world, laws of physics don’t notice that there’s a “loop”. One way to see the problem is as a situation that demonstrates failure to adequately account for the real world with the semantics usually employed to think about it.
Too opaque.
Alas, yes. I’m working on that.
If it’s a loop in Platonia, then all causation happens in Platonia. If any causation can be said to happen in the real world, then real causation is happening backwards in time in the Newcomb scenario.
But I, for one, have no problem with that. All causal processes observed so far have run in the same temporal direction, but that’s no reason to rule out a priori the possibility of exceptions.
ETA: Nor to rule out loops.
I don’t see why Newcomb’s paradox breaks causality. It seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega’s prediction and your action are caused by this predisposition, meaning Omega’s prediction is merely correlated with, not a cause of, your choice.
It’s commonplace for an event A to cause an event B, with both sharing a third antecedent cause C. (The bullet’s firing causes the prisoner to die, but the finger’s pulling of the trigger causes both.) Newcomb’s scenario has the added wrinkle that event B also causes event A. Nonetheless, both still have the antecedent cause C that you describe.
All of this only makes sense under the right analysis of causation. In this case, the right analysis is a manipulationist one, such as that given by Judea Pearl.
Newcomb’s scenario has the added wrinkle that event B also causes event A
I don’t see how. Omega doesn’t make the prediction because you made the action; he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn’t achieve perfect prediction, but I might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb’s paradox, and observing what such people actually do (so long as my decision criteria weren’t known to the person I was testing).
Am I violating causality by doing this? Clearly not—my prediction is caused by the blog post and my observations, not by the action. The same thing that causes you to say you’d decide one way is also what causes you to act one way. As I get better and better, nothing changes, nor do I see why something would if I am able to simulate you perfectly, achieving 100% accuracy (some degree of determinism is assumed there, but then it’s already in the original thought experiment if we assume literally 100% accuracy).
Assuming I’m understanding it correctly, the same would be true for a manipulationist definition. If we can manipulate your mental state, we’d change both the prediction (assuming Omega factors in this manipulation) and the decision, so your mental state is a cause of both. However, if we could manipulate your action without changing the underlying state that Omega’s prediction is based on, our manipulation would not change the prediction. In practice, this may be impossible (it requires Omega not to factor in our manipulation, which is contradicted by assuming he is a perfect predictor), but in principle it seems valid.
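To make that manipulationist reading concrete, here is a minimal structural-model sketch (the variable names D, P, A and the deterministic functions are invented for illustration, not part of the thought experiment): the disposition D causes both the prediction P and the action A, so intervening on D moves both, while intervening on A alone leaves P untouched.

```python
# Toy structural model for the manipulationist reading: a single
# disposition D causes both Omega's prediction P and your action A.
# Intervening on D changes both; intervening on A alone leaves P fixed.

def prediction(disposition):
    return disposition            # Omega reads the disposition directly

def action(disposition):
    return disposition            # you act on the same disposition

def run(disposition, do_action=None):
    p = prediction(disposition)                                       # P := f(D)
    a = do_action if do_action is not None else action(disposition)   # A := g(D), unless we do(A)
    return p, a

print(run("one-box"))                        # ('one-box', 'one-box')
print(run("two-box"))                        # intervening on D flips both
print(run("one-box", do_action="two-box"))   # do(A): prediction stays 'one-box'
```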
He makes a prediction based on the nearby state of the universe that you model with an accuracy that approaches 1. If your mathematician can’t handle that then find a better mathematician.
I shall continue to find Omega useful.
ETA: The part of the Newcomb problem that is actually hard to explain is that I am somehow confident that Omega is being truthful.
Any accuracy better than a random coin toss breaks causality. Prove otherwise if you can, but many have tried before you and all have failed.
Billions of social encounters involving better-than-even predictions of similar choices every day make a mockery of this claim.
That is without even invoking a Jupiter brain executing Bayes’ rule.
At first I was going to vote this up to correct the seemingly unfair downvoting against you in this thread, but this particular comment seems both wrong and ill-explained. I’d prefer to have your reasons rather than your assurances.
Recall that Newcomb’s “paradox” has a payout (when Omega is always right) of $1000k for the 1-boxer, and $1k for the 2-boxer. But if Omega is correct with only p=.500001 then I should always 2-box.
I do agree that there is some 1>p>.5 where the idea of Omega having a belief about what I will choose, one that’s correct with probability p, is just as troubling as if p=1.
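For concreteness, a quick sketch of that payout comparison, assuming the standard $1,000,000 / $1,000 amounts and simply conditioning on Omega being right with probability p about whichever choice you make (this is the bare accounting, not an endorsement of any decision theory): two-boxing has the higher expectation for any p below 0.5005, which covers the p=.500001 case above.

```python
# Expected payouts when Omega's prediction matches your choice with
# probability p (standard payouts: $1,000,000 big box, $1,000 small box;
# the big box is filled exactly when Omega predicted one-boxing).

def ev_one_box(p):
    return p * 1_000_000                  # full box iff correctly predicted

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000    # full box iff mispredicted

for p in (0.500001, 0.5005, 0.51, 0.9, 1.0):
    print(p, round(ev_one_box(p)), round(ev_two_box(p)))

# Break-even: p * 1e6 = 1e3 + (1 - p) * 1e6  =>  p = 1_001_000 / 2_000_000 = 0.5005
# Below that, two-boxing has the higher expectation; above it, one-boxing does.
```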
By a trivial argument (of the kind employed in algorithm complexity analysis and cryptography), since you can just toss a coin or do the mental equivalent of it, any guaranteed probability nontrivially >.5, even by a ridiculously small margin, is impossible to achieve. Probability against a random human is entirely irrelevant; what Omega must achieve is probability nontrivially >.5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
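A toy illustration of that coin-toss point (the simulation and its history-based prediction rule are made up for the example): whatever rule a predictor follows, if it cannot see the flip itself it matches an independent fair coin only about half the time.

```python
# Against a chooser who decides by an independent fair coin flip, any
# prediction rule that only sees past behaviour lands at ~50% accuracy.
import random

def predictor(history):
    # Arbitrary deterministic rule of the visible history (hypothetical).
    return "one-box" if history.count("one-box") >= len(history) / 2 else "two-box"

random.seed(0)
history, hits, trials = [], 0, 100_000
for _ in range(trials):
    guess = predictor(history)
    choice = random.choice(["one-box", "two-box"])   # the fair coin
    hits += (guess == choice)
    history.append(choice)

print(f"Accuracy against a fair coin: {hits / trials:.3f}")   # ~0.5
```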
If we force determinism (which is already cheating), disable free will (in the sense of being able to freely choose our answer only at the point where we have to), and let Omega see our brain, it basically means that we have to decide before Omega does, and have to tell Omega what we decided, which reverses causality and collapses the problem into “Choose 1 or 2 boxes. Based on your decision Omega chooses what to put in them”.
From the linked Wikipedia article:
More recent work has reformulated the problem as a noncooperative game in which players set the conditional distributions in a Bayes net. It is straight-forward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions for the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.
Some argue that Newcomb’s Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.
That’s basically it. It’s ill-defined, and any serious formalization collapses it into either “you choose first, so one box”, or “Omega chooses first, so two box” trivial problems.
Probability against a random human is entirely irrelevant; what Omega must achieve is probability nontrivially >.5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
The limit of how uncooperative you can be is determined by how much information can be stored in the quarks from which you are constituted. Omega can model these. Your recourse to uncooperativeness is for your entire brain to be balanced such that your choice depends on quantum uncertainty. Omega then treats you the same way he treats any other jackass who tries to randomize with a quantum coin.
Geez! When did flipping a (provably) fair coin when faced with a tough dilemma, start being the sole domain of jackasses?
Geez! When did questioning the evilness of flipping a fair coin when faced with a tough dilemma, start being a good reason to mod someone down? :-P
Don’t know. I was planning to just make a jibe at your exclusivity logic (some jackasses do therefore all who do...).
Make that two jibes. Perhaps the votes were actually a cringe response at the comma use. ;)
Well, you did kinda insinuate that flipping a coin makes you a jackass, which is kind of an extreme reaction to an unconventional approach to Newcomb’s problem :-P
;) I’d make for a rather harsh Omega. If I was dropping my demi-divine goodies around I’d make it quite clear that if I predicted a randomization I’d booby trap the big box with a custard pie jack-in-a-box trap.
If I was somewhat more patient I’d just apply the natural extension, making the big box reward linearly dependent on the predicted probability. Then they could plot a graph of how much money they are wasting per unit of probability they assign to making the stupid choice.
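A rough sketch of the arithmetic behind that extension, under the assumption that the big box holds $1,000,000 times the predicted probability of one-boxing, the small box still holds $1,000, and the prediction matches the probability you actually play:

```python
# Linear-reward variant: big box = $1,000,000 * q, where q is Omega's
# predicted probability that you one-box; small box = $1,000 as usual.
# Assume the prediction q equals your actual one-boxing probability p.

def expected_payout(p):
    big = 1_000_000 * p              # contents scale with the prediction
    small = 1_000
    return p * big + (1 - p) * (big + small)   # = 1_000_000*p + 1_000*(1-p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"P(one-box) = {p:.2f}: expected payout ${expected_payout(p):,.0f}")

# Slope is $1,000,000 - $1,000, so each unit of probability shifted toward
# two-boxing costs about $999,000.
```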
I’d make for a rather harsh Omega. If I was dropping my demi-divine goodies around I’d make it quite clear that if I predicted a randomization I’d booby trap the big box with a custard pie jack-in-a-box trap.
Wow, they sure are right about that “power corrupts” thing ;-)
Power corrupts. Absolute power corrupts… comically?
Free will is not incompatible with Omega predicting your actions (well, unless Omega deliberately manipulates your actions based on that predictive power, but that’s outside the scope of this paradox), and Omega doesn’t even need to see inside your head, just observe your behavior until it thinks it can predict your decisions with high accuracy based only on the input that you have received. Omega doesn’t need to be 100% accurate anyway, only to do better than 99.9%. Determinism is also not required, for the same reason. (Though yeah, you could toss a quantum coin to make the odds 50-50, but this seems a rather uninteresting case. Omega could predict that you select either at 50% chance, and thus you lose on average about $750,000 every time Omega makes this offer to you.)
So basically, where we disagree is that Omega can choose first and you’d still have to one-box, since Omega knows the factors that correlate with your decision at high probability. Without breaking causality, without messing with free will and even without requiring determinism.
“Omega chooses first, so two box” trivial problems.
Yes. Omega chooses first. That’s Newcomb’s. The other one isn’t.
It seems that the fact that both my decision and Omega’s decision are determined (quantum acknowledged) by the earlier state of the universe utterly bamboozles your decision theory. Since that is in fact how this universe works, your decision theory is broken. It is foolish to define a problem as ‘ill-defined’ simply because your decision theory can’t handle it.
The current state of my brain influences both the decisions I will make in the future and the decisions other agents can make based on what they can infer about me from their observations. This means that intelligent agents will be able to predict my decisions better than a coin flip. In the case of superintelligences, they can get a lot better than 0.5.
Just how much money does Omega need to put in the box before you are willing to discard ‘Serious’ and take the cash?
Thanks for the explanation. I think that if the right decision is always to 2-box (which it is if Omega is wrong 1/2-epsilon of the time), then all Omega has to do is flip a biased coin and make the more likely prediction be that I 2-box. But I guess you disagree.
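A toy check of that biased-coin idea (the accuracy figure and the simulation are invented for the example): if everyone who faces it two-boxes, Omega hits any advertised accuracy p just by predicting “2-box” with probability p, without modelling anyone at all.

```python
# Biased-coin Omega: predict "2-box" with probability p, independent of
# the chooser. If every subject in fact two-boxes, measured accuracy ~= p.
import random

random.seed(0)
p = 0.9                        # advertised accuracy, arbitrary for the example
trials = 100_000
correct = sum(
    ("2-box" if random.random() < p else "1-box") == "2-box"   # everyone 2-boxes
    for _ in range(trials)
)
print(f"Measured accuracy: {correct / trials:.3f}")   # close to p
```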
There’s a true problem if you require Omega to make a deterministic decision; it’s probably impossible to even postulate he’s right with some specific probability. Maybe that’s what you were getting at.