They don’t require breaking causality. The argument works even if Omega predicts you only barely above chance. I’m sure there are plenty of normal people who can do that just by talking to you.
There are also more important reasons. Take the doomsday argument. You can use the fact that you’re alive now to predict that we’ll die out “soon”. Suppose you had a choice between saving a life in a third-world country that likely wouldn’t amount to anything, or donating to SIAI to help in the distant future. You know it’s very unlikely for there to be a distant future. It’s like Omega did his coin toss, and if it comes up tails, we die out early and he asks you to waste the money by donating to SIAI. If it comes up heads, you’re in the future, and it’s better if you had donated.
That’s not something that might happen. That’s a decision you have to make before you pick a charity to donate to. Lives are riding on this. That’s if the coin lands on tails. If it lands on heads, there is more life riding on it than has so far existed in the known universe. Please choose carefully.
Arguments like these remind me of students’ mistakes from Algorithms and Data Structures 101: statements like that are very intuitive, absolutely wrong, and once you figure out why the reasoning doesn’t work, it’s easy to forget that most people never went through that step.
What is required is Omega predicting better than chance in the worst case. Predicting correctly with a ridiculously tiny chance of error against the “average” person is worthless.
To avoid Omega and causality silliness, and just demonstrate this intuition, let’s take a slightly modified version of Boolean satisfiability: instead of one formula we have three formulas of the same length. If all three are identical, return true or false depending on their satisfiability; if they differ, return true if the number of one bits in the problem is odd (or some other trivial property).
It is obviously NP-complete, as any satisfiability problem reduces to it by concatenating the formula three times. If we use exponential brute force to solve the hard case, the average running time over uniformly random inputs is O(n) for scanning the string, plus O(2^(n/3)) for brute forcing on the 2^-(2n/3) fraction of inputs that need it, which contributes O(1). So we can solve an NP-complete problem in average linear time.
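To make the structure concrete, here is a minimal runnable sketch of that toy problem. The list-of-clauses encoding and the function names are assumptions made only for illustration (the comment above just talks about formulas as strings); what matters is the control flow: a linear scan plus an exponential branch that a uniformly random input almost never hits.

```python
from itertools import product

def brute_force_sat(clauses):
    """Exponential satisfiability check: try every assignment."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def modified_sat(f1, f2, f3):
    """Toy problem: three equal-length formulas given as clause lists."""
    if f1 == f2 == f3:
        # Hard branch: a genuine SAT check, O(2^(n/3)) time, but hit only on
        # the ~2^-(2n/3) fraction of uniformly random inputs where all three
        # formulas coincide.
        return brute_force_sat(f1)
    # Easy branch: a trivial linear-time property of the input, here the
    # parity of the number of positive literals.
    positives = sum(lit > 0 for f in (f1, f2, f3) for clause in f for lit in clause)
    return positives % 2 == 1

# Example: (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(modified_sat(formula, formula, formula))           # hard case, real SAT answer
print(modified_sat(formula, formula, [[1, 2], [3, 4]]))  # easy case, parity only
```

Averaged over uniformly random inputs, the hard branch fires so rarely that the expected cost of the exponential work vanishes, which is exactly the worst-case-versus-average-case gap described above.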
What happened? We were led astray by intuition, which assumed that a problem that is difficult in the worst case cannot be trivial on average. But that uniform weighting over inputs is an artifact—if you tried reducing any other NP problem into this one, you’d be getting the very difficult instances nearly all the time, as if by magic.
Back to Omega—even if Omega predicts normal people very well, as long as there is any thinking being it cannot predict, Omega must break causality. And such beings are not just hypothetical—people who decide based on a coin toss are exactly like that. Silly rules about disallowing chance merely make the counterexamples more complicated; Omega and Newcomb are still as much based on sloppy thinking as ever.
I don’t know of any reason why a coin toss would be the best choice in Newcomb’s paradox. If you decide based on reason, and don’t decide to flip a coin, and Omega knows you well, he can predict your action above chance. The paradox stands.
Omega cannot know coin flip results without violating causality. So he either puts that million in the box or not. As a result, no matter which way he decides, Omega has a 50% chance of violating his own rules, which was supposedly impossible, breaking the problem.
What I mean is: if you change the scenario so that he only has to predict above chance when you don’t flip a coin, and he isn’t getting it right every time anyway, the same basic principle applies, but it doesn’t violate causality.
The obvious extensions of the problem to cases with a fallible Omega are:
P($1,000,000) = P(onebox)
Reward = $1,000,000 * P(onebox)
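As a rough illustration of the difference between those two rules, here is a minimal sketch (the function names and the sampling framing are assumptions for illustration, not anything stated in the thread). Both rules give the same expected payoff, but under the first the box holds either the full million or nothing, while under the second its contents are exactly $1,000,000 * P(onebox), so reading the box reveals P(), which is what the exploit a few comments below relies on.

```python
import random

MILLION = 1_000_000

def box_contents_probabilistic(p_onebox):
    # First rule: P($1,000,000) = P(onebox). The million is either there
    # or it is not, with probability equal to Omega's prediction.
    return MILLION if random.random() < p_onebox else 0

def box_contents_proportional(p_onebox):
    # Second rule: Reward = $1,000,000 * P(onebox). The box always contains
    # exactly this amount, so its contents encode P(onebox) directly.
    return MILLION * p_onebox
```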
In the Bayesian interpretation P() would be Omega’s subjective probability. In the frequentist interpretation the question doesn’t make any sense, as you make a single boxing decision, not a large number of tiny boxing decisions. Either way P() is very ill-defined.
No more so than other probabilities. Probabilities about the future decisions of other actors aren’t disprivileged; that would be free-will confusion. And are you seriously claiming that the probabilities of a coin flip don’t make sense in a frequentist interpretation? That was the context. In the general case it would be the long-term relative frequency with which possible versions of you, similar enough to be indistinguishable to Omega, decide that way, or something like that, if you insisted on using frequentist statistics for some reason.
(this comment assumes “Reward = $1,000,000 * P(onebox)”)
You misunderstand the frequentist interpretation: the sample size is 1, you either decide yes or you decide no. Generalizing from a single decider requires a prior reference class (“coin tosses”), which gets us into Bayesian subjective interpretations. Frequentists don’t have any concept of “probability of a hypothesis” at all, only “probability of data given a hypothesis”, and the only way to connect them is using priors. “Frequency among possible worlds” is also a Bayesian thing that weirds frequentists out.
Anyway, if Omega has amazing prediction powers, and P() can be deterministically known by looking into the box, this is far more valuable than a mere $1,000,000! Let’s say I make my decision by randomly generating some string and checking if it’s a valid proof of the Riemann hypothesis—if P() is non-zero, I’ve made myself $1,000,000 anyway.
I understand that there’s an obvious technical problem if Omega rounds the number to whole dollars, but that’s just a minor detail.
And actually, it is a lot worse in the popular formulation of the problem, “if your decision relies on randomness, there will be no million”, which tries to work around coin tossing. In that case a person randomly trying to prove a false statement gets a million (as no proof could work, so his decision was reliable), and a person randomly trying to prove a true statement gets $0 (as there’s a non-zero chance of him randomly generating a correct proof).
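To pin the exploit down, here is a minimal sketch of the decision procedure under the “Reward = $1,000,000 * P(onebox)” rule; is_valid_rh_proof is a hypothetical stand-in for a proof checker, not something specified anywhere in the thread.

```python
import random
import string

def decide(is_valid_rh_proof, length=100_000):
    # Generate one random string; one-box only if it happens to be a valid proof.
    candidate = "".join(random.choices(string.printable, k=length))
    return "onebox" if is_valid_rh_proof(candidate) else "twobox"

# Under Reward = $1,000,000 * P(onebox), the opaque box contains
#   $1,000,000 * (valid proofs of this length) / (all strings of this length),
# which is non-zero exactly when a valid proof of that length exists. Simply
# looking at whether the box is empty then answers a mathematical question,
# which is the sense in which a readable P() is worth more than the money.
```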
Another fun idea would be measuring both the position and velocity of an electron: tossing a coin to decide which one, measuring that one, and getting the other from Omega.
Possibilities are just endless.
The issue was whether the formulation makes sense, not whether it makes frequentists freak out (and it’s not substantially different from, e.g., drawing from an urn for the first time). In either case P() was the probability of an event, not a hypothesis.
In these sorts of problems you are supposed to assume that the dollar amounts match your actual utilities (as you observe, your exploit doesn’t work anyway for tests with a probability of <0.5*10^-9 if rounding to cents, and you could just assume that you have already gained all the knowledge you could gain through such tests, or that Omega possesses exactly the same knowledge as you except for human psychology, or whatever).