This is an interesting response because option 4 is essentially what Jiro was advocating earlier in the thread, and you're suggesting that Omega wouldn't even present the opportunity to people who would try to do that. Would you agree with this interpretation of your comment?
Yes, I would.
If we assume, for the moment, that the people who would take option 4 form at least 10% of the general population (this may be a little low), and that Omega has a track record of success in 99% or more of previous trials (as is often specified in Newcomb-like problems), then it is clear that whatever algorithm Omega uses to decide whom to present the boxes to is biased, and heavily so, against offering the boxes to such a person.
Consider:
P(P) = The probability that Omega will present the boxes to a given person.
P(M|P) = The probability that Omega will fill the boxes correctly (empty for a two-boxer, full for a one-boxer).
P(M’|P) = The probability that Omega will fail to fill the boxes correctly.
P(O) = The probability that the person will choose option 4.

P(M’|O) = 1 (from the definition of option 4), and therefore P(M|O) = 0.
If Omega is a perfect predictor, then P(M|O’) = 1 as well.
P(M|P) = 0.99 (from the statement of the problem).
P(O) = 0.1 (assumed).
Now, of all the people to whom boxes are presented, Omega gets at most one percent wrong: P(M’|P) ≤ 0.01. Splitting the presented players by whether they are option-4 types, P(M’|P) = P(M’|O)·P(O|P) + P(M’|O’)·P(O’|P) = 1·P(O|P) + 0, so it follows that P(O|P) ≤ 0.01.
If Omega is a less-than-perfect predictor, then P(M’|O’) > 0, and P(O|P) < 0.01.
And since P(O|P) ≤ 0.01 while P(O) = 0.1, Bayes’ theorem gives P(P|O) = P(O|P)·P(P)/P(O) ≤ (0.01/0.1)·P(P) = 0.1·P(P); an option-4 player is at most one-tenth as likely as an average person to be offered the boxes. I therefore conclude that Omega must have a bias, and a fairly strong one, against presenting the boxes to such perverse players.
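For anyone who wants to check the arithmetic, here is a minimal Python sketch. It is not part of the original argument: the base presentation rate and the down-weighting factor are illustrative assumptions I am supplying. It shows that down-weighting option-4 players by roughly a factor of eleven in the presentation decision is what it takes to hold Omega's error rate among presented players to exactly 1%:

```python
import random

# Numbers taken from the discussion above; everything else is illustrative.
P_OPTION4 = 0.10     # assumed fraction of the population choosing option 4
BASE_RATE = 0.5      # hypothetical chance Omega presents boxes to a normal person
DOWNWEIGHT = 1 / 11  # presentation rate for option-4 players, as a fraction of
                     # BASE_RATE (chosen to hit a 1% error rate exactly)

# Analytic check: among presented players,
#   P(O|P) = P_OPTION4*DOWNWEIGHT / (P_OPTION4*DOWNWEIGHT + (1 - P_OPTION4))
# and every presented option-4 player is a guaranteed miss, so P(M'|P) = P(O|P).
p_O_given_P = (P_OPTION4 * DOWNWEIGHT) / (P_OPTION4 * DOWNWEIGHT + (1 - P_OPTION4))
print(f"analytic P(M'|P) = {p_O_given_P:.4f}")  # 0.0100

# Monte Carlo check of the same setup, assuming Omega predicts everyone else
# perfectly and is wrong on every option-4 player it does present to.
rng = random.Random(0)
presented = wrong = 0
for _ in range(1_000_000):
    is_option4 = rng.random() < P_OPTION4
    rate = BASE_RATE * (DOWNWEIGHT if is_option4 else 1.0)
    if rng.random() < rate:
        presented += 1
        wrong += is_option4
print(f"simulated P(M'|P) = {wrong / presented:.4f}")  # ~0.0100
```

With these illustrative numbers, P(P|O)/P(P) = DOWNWEIGHT / (0.9 + 0.1·DOWNWEIGHT) = 0.1, matching the one-in-ten bound derived above.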