If no facts about the nature of the “noise” are specified, then the phrase “probability of correct decision by Omega is 0.9” does not make sense.
That is just what “probability” means: it quantifies possibilities that can’t be ruled out, in situations where you can’t tell which of them actually take place and which don’t.
Bayesians say all probabilities are conditional. The question here is on what this “0.9” probability is conditioned.
On my having chicken for supper. Unless you can unpack “being conditional” into something more than a bureaucratic hoop that’s easily jumped through, it’s of no use.
On reflection, my previous comment was off the mark. Knowing that Omega always predicts “two-box” amounts to knowing an obvious correlation between a property of the agents and the quality of the prediction. So your correction basically states that the second view is the “natural” one: Omega always predicts correctly and then modifies its answer in 10% of cases.
In that case, the “simulation uncertainty” argument should work the same way as in the “pure” Newcomb’s problem, with a correction for the 10% noise (which does not change the answer).
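A quick expected-value check of that last claim. This is only a sketch under assumed numbers: the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one), which are not stated in this thread, and a flat 90% chance that Omega predicts the agent’s actual choice.

```python
# Expected-value check for Newcomb's problem with a 90%-accurate predictor.
# Payoffs are the usual (assumed) ones: $1,000,000 opaque box, $1,000 transparent box.
ACCURACY = 0.9
BIG, SMALL = 1_000_000, 1_000

# One-boxing: you get BIG exactly when Omega correctly predicted one-boxing.
ev_one_box = ACCURACY * BIG

# Two-boxing: you always get SMALL, plus BIG when Omega wrongly predicted one-boxing.
ev_two_box = SMALL + (1 - ACCURACY) * BIG

print(ev_one_box)  # 900000.0
print(ev_two_box)  # ~101000.0 -- the 10% noise does not flip the comparison
```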
Oh, come on. According to Jaynes, the marginal probability P(Omega is correct | Omega predicts something) is supposed to be additionally conditioned on everything you know about the situation. If you know that Omega always predicts “two-box”, then P(Omega is correct | Omega predicts something) is equal to the relative frequency of two-boxers in the population. If you know that Omega first always predicts correctly and then modifies its answer in 10% of cases, then it’s something completely different. If you have no knowledge about whether the first or the second is true, then what can you do? Presumably, try Solomonoff induction; too bad it’s incomputable.
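A small simulation of the two readings, with an assumed (purely illustrative) 30% base rate of two-boxers, showing how differently “Omega is correct” comes out depending on which mechanism you condition on.

```python
import random

random.seed(0)
N = 100_000
p_two_boxer = 0.3  # assumed base rate of two-boxers in the population (illustrative only)

def accuracy(predict):
    """Empirical P(Omega is correct) for a given prediction rule."""
    hits = 0
    for _ in range(N):
        choice = "two-box" if random.random() < p_two_boxer else "one-box"
        hits += predict(choice) == choice
    return hits / N

# First reading: Omega ignores the agent and always says "two-box".
def always_two_box(choice):
    return "two-box"

# Second reading: Omega first predicts correctly, then flips its answer in 10% of cases.
def correct_then_noisy(choice):
    wrong = "one-box" if choice == "two-box" else "two-box"
    return wrong if random.random() < 0.1 else choice

print(accuracy(always_two_box))      # ~0.30: just the base rate of two-boxers
print(accuracy(correct_then_noisy))  # ~0.90: independent of the population mix
```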