I don’t see any reason why a coin toss would be the best choice in Newcomb’s paradox. If you decide based on reason, and don’t decide to flip a coin, and Omega knows you well, he can predict your action better than chance. The paradox stands.
Omega cannot know the result of a coin flip without violating causality. So he either puts the million in the box or he doesn’t, and no matter which way he decides, he has a 50% chance of violating his own rules, which was supposedly impossible. That breaks the problem.
What I mean is: if you change the scenario so that Omega only has to predict better than chance when you don’t flip a coin, and he isn’t always getting it right anyway, the same basic principle applies, but without violating causality.
The obvious extensions of the problem to cases with a fallible Omega are:

P($1,000,000) = P(onebox)
Reward = $1,000,000 * P(onebox)
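A quick sketch of the payoffs under this reading (a toy model: the 0.5 accuracy for a coin-flipper follows from the formula above, but the $1,000 transparent box is an assumption carried over from the standard Newcomb setup):

```python
# Toy expected-reward calculation for a fallible Omega, assuming
# Reward = $1,000,000 * P(onebox), plus the visible $1,000 if you two-box.

def expected_reward(p_onebox: float, two_box: bool) -> float:
    """Expected payout given the probability Omega assigns to you one-boxing."""
    reward = 1_000_000 * p_onebox
    if two_box:
        reward += 1_000  # the transparent box is always yours if you take it
    return reward

# A coin-flipper is predicted to one-box with probability 0.5:
print(expected_reward(0.5, two_box=False))  # 500000.0
print(expected_reward(0.5, two_box=True))   # 501000.0
```

Note that under this linear formula, two-boxing always adds exactly $1,000 regardless of P(onebox).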
In a Bayesian interpretation, P() would be Omega’s subjective probability. In a frequentist interpretation the question doesn’t make any sense, as you make a single boxing decision, not a large number of boxing decisions. Either way, P() is very ill-defined.
No more so than other probabilities. Probabilities about the future decisions of other actors aren’t somehow disprivileged; that would be free-will confusion. And are you seriously claiming that the probabilities of a coin flip don’t make sense in a frequentist interpretation? That was the context. In the general case, if you insisted on using frequentist statistics for some reason, P() would be something like the long-run relative frequency with which possible versions of you, similar enough to be indistinguishable to Omega, decide that way.
(this comment assumes “Reward = $1,000,000 * P(onebox)”)
You misunderstand the frequentist interpretation: the sample size is 1, since you either decide yes or decide no. Generalizing from a single decider requires a prior reference class (“coin tosses”), which gets us into Bayesian subjective interpretations. Frequentists have no concept of “probability of a hypothesis” at all, only “probability of data given a hypothesis”, and the only way to connect the two is through priors. “Frequency among possible worlds” is also a Bayesian notion that weirds frequentists out.
Anyway, if Omega really has such amazing prediction powers, and P() can be deterministically learned by looking into the box, that is worth far more than a mere $1,000,000! Let’s say I make my decision by randomly generating a string and checking whether it is a valid proof of the Riemann hypothesis: if P() is non-zero, I’ve made myself $1,000,000 anyway.
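A sketch of that exploit, with the proof check stubbed out (`is_valid_proof` is a hypothetical placeholder, and the 1e-12 probability is an illustrative assumption, not a real estimate):

```python
import random

def is_valid_proof(candidate: str) -> bool:
    """Stand-in for a real proof checker (hypothetical). Always False here,
    though in principle a random string has some tiny chance of passing."""
    return False  # placeholder: a real verifier would actually check the string

def decide_by_random_proof_search(rng: random.Random) -> str:
    """One-box iff a randomly generated string happens to prove the theorem."""
    candidate = "".join(rng.choice("abcdefghij") for _ in range(20))
    return "one-box" if is_valid_proof(candidate) else "two-box"

# If P(random valid proof) is non-zero, Reward = $1,000,000 * P() is non-zero
# too, so the box's contents leak information about the theorem itself.
p = 1e-12  # hypothetical non-zero probability of a random valid proof
print("expected dollars in the box:", 1_000_000 * p)
```

The point is that the *amount* in the box, however tiny, would encode whether such a proof is possible at all.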
I understand that there’s an obvious technical problem if Omega rounds the number to whole dollars, but that’s just a minor detail.
And it actually gets a lot worse in the popular formulation “if your decision relies on randomness, there will be no million”, which tries to work around coin tossing. In that case a person randomly trying to prove a false statement gets the million (no proof could possibly work, so his decision was reliable), while a person randomly trying to prove a true statement gets $0 (since there is a non-zero chance of him randomly generating a correct proof).
Another fun idea would be measuring both the position and velocity of an electron: toss a coin to decide which one to measure yourself, and get the other from Omega’s prediction.
Possibilities are just endless.
The issue was whether the formulation makes sense, not whether it makes frequentists freak out (and it is not substantially different from, e.g., drawing from an urn for the first time). In either case P() was the probability of an event, not of a hypothesis.
In these sorts of problems you are supposed to assume that the dollar amounts match your actual utilities. (As you observe, your exploit doesn’t work anyway for tests with a probability below 0.5*10^-8 if Omega rounds to cents; and you could just assume that you have already gained all the knowledge you could gain through such a test, or that Omega possesses exactly the same knowledge as you except for human psychology, or whatever.)
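The rounding cutoff can be checked directly (a toy calculation, assuming Omega rounds the reward to the nearest cent, so anything below half a cent displays as $0.00):

```python
# Cent-rounding kills the exploit for tiny probabilities: the box shows
# $0.00 whenever $1,000,000 * p rounds below half a cent, i.e. p < 0.5e-8.
def rounded_reward_cents(p: float) -> int:
    """Reward in whole cents, rounded to nearest."""
    return round(1_000_000 * p * 100)

print(rounded_reward_cents(0.4e-8))  # below the half-cent threshold -> 0
print(rounded_reward_cents(0.6e-8))  # above it -> 1 cent
```

So any test whose success probability falls below that threshold reads the same as an impossible one.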