It seems to me that the entire discussion is confused: many people are using the claim that Omega can't predict your actions to argue about what to do in the hypothetical world where it can. Accepting the assumption that Omega can predict your actions, the problem reduces to a trivial expected-utility calculation:
If the opaque box contains b1 utility and the transparent one b2, and Omega has probability e1 of falsely predicting you'll one box and probability e2 of falsely predicting you'll two box, then the expected utilities are
1 box: (1 - e2)*b1
2 box: e1*b1 + b2
And you should 1 box unless b2 is bigger than (1 - e1 - e2)*b1.
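To make the arithmetic concrete, here is a minimal Python sketch of the calculation above. The variable names (b1, b2, e1, e2) mirror the definitions in the text; the sample values are illustrative only, not from the original problem statement.

```python
def expected_utilities(b1, b2, e1, e2):
    """Expected utility of each choice, given:
    b1 -- utility in the opaque box
    b2 -- utility in the transparent box
    e1 -- probability Omega falsely predicts one-boxing (given you two box)
    e2 -- probability Omega falsely predicts two-boxing (given you one box)
    """
    one_box = (1 - e2) * b1      # opaque box is filled unless Omega errs
    two_box = e1 * b1 + b2       # transparent box always; opaque only if Omega errs
    return one_box, two_box

# Illustrative numbers: a near-perfect predictor.
b1, b2 = 1_000_000, 1_000
e1 = e2 = 0.01

one_box, two_box = expected_utilities(b1, b2, e1, e2)
print(one_box, two_box)          # 990000.0 11000.0
print("1 box" if b2 <= (1 - e1 - e2) * b1 else "2 box")   # 1 box
```

With these numbers the decision rule says to 1 box, since b2 = 1,000 is far below (1 - e1 - e2)*b1 = 980,000.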