At first I was going to vote this up to correct the seemingly unfair downvoting against you in this thread, but this particular comment seems both wrong and ill-explained. I’d prefer to have your reasons rather than your assurances.
Recall that Newcomb’s “paradox” has a payout (when Omega is always right) of $1000k for the 1-boxer, and $1k for the 2-boxer. But if Omega is correct with only p=.500001 then I should always 2-box.
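A minimal sanity check of that arithmetic, as a sketch: assume the usual $1,000,000 / $1,000 payouts and a naive expected-value calculation that simply conditions the big box’s contents on Omega’s stated accuracy p.

```python
# Sketch: naive expected values for one-boxing vs. two-boxing, conditioning
# the big box's contents on Omega's accuracy p (payouts as in the comment:
# $1,000,000 in the big box, $1,000 in the small one).
BIG, SMALL = 1_000_000, 1_000

def one_box_ev(p):
    # Omega predicted one-boxing (and filled the big box) with probability p.
    return p * BIG

def two_box_ev(p):
    # Omega predicted two-boxing (and left the big box empty) with
    # probability p, so the big box is full with probability 1 - p.
    return SMALL + (1 - p) * BIG

for p in (0.500001, 0.5005, 0.51, 0.999999):
    print(p, one_box_ev(p), two_box_ev(p))
# p = 0.500001: two-boxing wins (~$501.0k vs ~$500.0k).
# The strategies break even at p = 1001/2000 = 0.5005; above that, one-boxing wins.
```

On those assumptions the strategies break even at p = 0.5005, so p = .500001 does favor two-boxing, while anything noticeably above .5005 favors one-boxing.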
I do agree that there is some p, with 1 > p > .5, where the idea of Omega having a belief about “what I will choose” that is correct with probability p is just as troubling as if p = 1.
By a trivial argument (of the kind employed in algorithm complexity analysis and cryptography) that you can just toss a coin, or do the mental equivalent of it, any guaranteed probability nontrivially above .5, even by a ridiculously small margin, is impossible to achieve. Probability against a random human is entirely irrelevant; what Omega must achieve is probability nontrivially above .5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
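A minimal simulation of that coin-toss argument, assuming the coin really is independent of anything Omega can observe (which is exactly the assumption the replies below attack):

```python
import random

# Sketch: against a chooser whose decision is an independent fair coin,
# no prediction strategy can be right more than half the time in expectation.
def omega_predicts():
    # Stand-in for any prediction strategy whatsoever; its output cannot
    # matter, because the chooser's coin is independent of it.
    return random.choice(("one-box", "two-box"))

def maximally_uncooperative_chooser():
    return random.choice(("one-box", "two-box"))

trials = 1_000_000
hits = sum(omega_predicts() == maximally_uncooperative_chooser() for _ in range(trials))
print(hits / trials)  # ~0.5: no guaranteed accuracy nontrivially above one half
```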
If we force determinism (which is cheating already), disable free will (as in the ability to freely choose our answer only at the point where we have to), and let Omega see our brain, it basically means that we have to decide before Omega does, and have to tell Omega what we decided, which reverses causality and collapses the problem into “Choose 1 or 2 boxes. Based on your decision, Omega chooses what to put in them.”
From the linked Wikipedia article:
More recent work has reformulated the problem as a noncooperative game in which players set the conditional distributions in a Bayes net. It is straight-forward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions for the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.
Some argue that Newcomb’s Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.
That’s basically it. It’s ill-defined, and any serious formalization collapses it into either “you choose first, so one box”, or “Omega chooses first, so two box” trivial problems.
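A rough sketch of the two formalizations being contrasted; the $1,000,000 / $1,000 payouts are the standard ones, and the 0.999 accuracy figure is only an illustrative assumption.

```python
# Sketch of the two "trivial problems" the comment above says Newcomb's
# collapses into, one model per Bayes-net assumption from the quoted article.
BIG, SMALL = 1_000_000, 1_000

def ev_you_choose_first(choice, p=0.999):
    # Model 1: the prediction tracks your choice with accuracy p,
    # as if you effectively decide before Omega does.
    prob_full = p if choice == "one-box" else 1 - p
    return prob_full * BIG + (SMALL if choice == "two-box" else 0)

def ev_omega_chose_first(choice, prob_full):
    # Model 2: the big box is already full or empty with a probability
    # that does not depend on what you now choose.
    return prob_full * BIG + (SMALL if choice == "two-box" else 0)

# Model 1: one-boxing is optimal ($999,000 vs $2,000 at p = 0.999).
print(ev_you_choose_first("one-box"), ev_you_choose_first("two-box"))

# Model 2: two-boxing dominates for every fixed prob_full.
for prob_full in (0.0, 0.5, 1.0):
    print(ev_omega_chose_first("one-box", prob_full),
          ev_omega_chose_first("two-box", prob_full))
```

Each model makes a different strategy optimal, which is the “two inconsistent assumptions” point from the quoted article.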
Probability against a random human is entirely irrelevant; what Omega must achieve is probability nontrivially above .5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
The limit of how uncooperative you can be is determined by how much information can be stored in the quarks from which you are constituted. Omega can model these. Your recourse of uncooperativity is for your entire brain to be balanced such that your choice depends on quantum uncertainty. Omega then treats you the same way he treats any other jackass who tries to randomize with a quantum coin.
Well, you did kinda insinuate that flipping a coin makes you a jackass, which is kind of an extreme reaction to an unconventional approach to Newcomb’s problem :-P
;) I’d make for a rather harsh Omega. If I was dropping my demi-divine goodies around I’d make it quite clear that if I predicted a randomization I’d booby trap the big box with a custard pie jack-in-a-box trap.
If I was somewhat more patient I’d just apply the natural extension, making the big box reward linearly dependent on the probabilities predicted. Then they can plot a graph of how much money they are wasting per probability they assign to making the stupid choice.
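One way to cash out that linear rule, purely as a sketch (the exact payout formula below is an assumption, not something specified above): the big box holds q × $1,000,000, where q is the predicted probability that the chooser one-boxes, and the prediction matches the chooser’s actual mixing probability.

```python
# Sketch of the linear-reward variant described above (assumed rule: the big
# box holds q * $1,000,000, where q is the predicted probability of one-boxing,
# and Omega's prediction matches the chooser's actual mixing probability).
BIG, SMALL = 1_000_000, 1_000

def expected_take(q):
    # With probability q the chooser one-boxes (big box only); with
    # probability 1 - q they two-box (big box plus the $1,000).
    return q * (q * BIG) + (1 - q) * (q * BIG + SMALL)

for q in (1.0, 0.99, 0.9, 0.5, 0.0):
    print(q, expected_take(q))
# expected_take(q) = SMALL + q * (BIG - SMALL): each unit of probability
# assigned to two-boxing costs $999,000, i.e. roughly $10k per percentage point.
```

On that reading the graph is a straight line: roughly $10,000 forgone for every percentage point of probability assigned to two-boxing.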
I’d make for a rather harsh Omega. If I was dropping my demi-divine goodies around I’d make it quite clear that if I predicted a randomization I’d booby trap the big box with a custard pie jack-in-a-box trap.
Wow, they sure are right about that “power corrupts” thing ;-)
Free will is not incompatible with Omega predicting your actions (well, unless Omega deliberately manipulates your actions based on that predictive power, but that’s outside the scope of this paradox), and Omega doesn’t even need to see inside your head, just observe your behavior until it thinks it can predict your decisions with high accuracy based only on the input you have received. Omega doesn’t need to be 100% accurate anyway, only to do better than 99.9%. Determinism is also not required, for the same reason. (Though yeah, you could toss a quantum coin to make the odds 50-50, but that seems a rather uninteresting case. Omega could predict that you select either at 50% chance, and thus you lose on average about $750,000 every time Omega makes this offer to you.)
So basically, where we disagree is that Omega can choose first and you’d still have to one-box, since Omega knows the factors that correlate with your decision with high probability. Without breaking causality, without messing with free will, and even without requiring determinism.
“Omega chooses first, so two box” trivial problems.
Yes. Omega chooses first. That’s Newcomb’s. The other one isn’t.
It seems that the fact that both my decision and Omega’s decision are determined (quantum acknowledged) by the earlier state of the universe utterly bamboozles your decision theory. Since that is in fact how this universe works, your decision theory is broken. It is foolish to declare a problem ‘ill-defined’ simply because your decision theory can’t handle it.
The current state of my brain influences both the decisions I will make in the future and the decisions other agents can make based on what they can infer about me from their observations. This means that intelligent agents will be able to predict my decisions better than a coin flip. In the case of superintelligences, they can get a lot better than 0.5.
Just how much money does Omega need to put in the box before you are willing to discard ‘Serious’ and take the cash?
Thanks for the explanation. I think that if the right decision is always to 2-box (which it is if Omega is wrong (1/2 − epsilon) of the time), then all Omega has to do is flip a biased coin and go with the more likely alternative, namely believing that I 2-box. But I guess you disagree.
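A minimal sketch of that biased-coin idea, under the added assumption that the whole population really does follow the “always 2-box” recommendation:

```python
import random

# Sketch: if every chooser follows the "always two-box" recommendation,
# Omega can hit any target accuracy just by biasing a coin toward
# predicting "two-box" (here with probability 0.999999).
def chooser():
    return "two-box"  # assumed: everyone follows the dominant strategy

def omega_predicts(bias=0.999999):
    return "two-box" if random.random() < bias else "one-box"

trials = 1_000_000
hits = sum(omega_predicts() == chooser() for _ in range(trials))
print(hits / trials)  # ~= the bias: Omega is "right with probability p" for free
```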
There’s a true problem if you require Omega to make a deterministic decision; it’s probably impossible even to postulate that he’s right with some specific probability. Maybe that’s what you were getting at.
Any accuracy better than a random coin toss breaks causality. Prove otherwise if you can, but many have tried before you and all failed.
Billions of social encounters every day, involving better-than-even predictions of similar choices, make a mockery of this claim.
That is without even invoking a Jupiter brain executing Bayes’ rule.
Geez! When did flipping a (provably) fair coin when faced with a tough dilemma, start being the sole domain of jackasses?
Geez! When did questioning the evilness of flipping a fair coin when faced with a tough dilemma, start being a good reason to mod someone down? :-P
Don’t know. I was planning to just make a jibe at your exclusivity logic (some jackasses do therefore all who do...).
Make that two jibes. Perhaps the votes were actually a cringe response at the comma use. ;)
Power corrupts. Absolute power corrupts… comically?