On me having chicken for supper. Unless you can unpack “being conditional” to more than a bureaucratic hoop that’s easily jumped through, it’s of no use.
On reflection, my previous comment was off the mark. Omega always predicting "two-box" would be an obvious correlation between a property of agents and the quality of the prediction. So your correction basically states that the second view is the "natural" one: Omega always predicts correctly and then modifies its answer in 10% of cases.
In that case, the "simulation uncertainty" argument should work the same way as in the "pure" Newcomb's problem, with a correction for the 10% noise (which does not change the answer).
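To see why the 10% noise doesn't change the answer, here is a minimal expected-value sketch, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and the "predict correctly, then flip the answer in 10% of cases" model:

```python
# Expected payoffs under a 90%-accurate predictor.
# Payoff amounts are the conventional Newcomb values (an assumption here).
ACCURACY = 0.9
MILLION, THOUSAND = 1_000_000, 1_000

# One-boxing: the opaque box is full iff Omega predicted you correctly.
ev_one_box = ACCURACY * MILLION + (1 - ACCURACY) * 0

# Two-boxing: the opaque box is full iff Omega predicted you incorrectly.
ev_two_box = ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

print(ev_one_box)  # ≈ 900,000
print(ev_two_box)  # ≈ 101,000
```

One-boxing dominates by a wide margin; the noise would have to exceed roughly 50% before the ranking flipped, so a 10% correction leaves the conclusion intact.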
Oh, come on. According to Jaynes, the marginal probability P(Omega is correct | Omega predicts something) is supposed to be additionally conditioned on everything you know about the situation. If you know that Omega always predicts "two-box", then P(Omega is correct | Omega predicts something) is equal to the relative frequency of two-boxers in the population. If you know that Omega first always predicts correctly and then modifies its answer in 10% of cases, then it's something completely different. If you have no knowledge about whether the first or the second is true, then what can you do? Presumably, try Solomonoff induction; too bad it's uncomputable.
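The two models really do give different marginal accuracies. A minimal Monte Carlo sketch, assuming a hypothetical population that is 30% two-boxers:

```python
import random

random.seed(0)
N = 100_000
FRAC_TWO_BOXERS = 0.3  # hypothetical population mix

def accuracy(predict):
    """Fraction of agents whose choice Omega predicts correctly."""
    correct = 0
    for _ in range(N):
        choice = "two-box" if random.random() < FRAC_TWO_BOXERS else "one-box"
        if predict(choice) == choice:
            correct += 1
    return correct / N

# Model A: Omega always predicts "two-box", regardless of the agent.
acc_a = accuracy(lambda choice: "two-box")

# Model B: Omega predicts correctly, then flips its answer in 10% of cases.
def model_b(choice):
    if random.random() < 0.1:
        return "one-box" if choice == "two-box" else "two-box"
    return choice

acc_b = accuracy(model_b)

print(round(acc_a, 2))  # ≈ 0.30: tracks the population frequency
print(round(acc_b, 2))  # ≈ 0.90: independent of the population
```

Under Model A the marginal accuracy is just the relative frequency of two-boxers; under Model B it is 0.9 no matter what the population looks like, which is why conditioning on which model holds changes everything.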