The premise is that Omega offers you the deal. If Omega’s predictions are always successful only because it declines to offer the deal whenever it can’t predict the result, then you could use me as Omega and I’d match its record: I’d simply never offer the deal.
The (non-nitpicked version of the) transparent box case shows what’s wrong with the concept: since your strategy might involve figuring out what Omega would have done, it may be in principle impossible for Omega to predict what you’re going to do, as Omega is indirectly trying to predict itself, leading to an undecidability paradox. The transparent boxes just make this simpler, because you can “figure out” what Omega would have done by looking into the boxes.
Of course, if you are not a perfect reasoner, it may be that Omega can always predict you, but then the question is no longer “which choice should I make”, it’s “which choice should I make within the limits of my imperfect reasoning”. And answering that requires formalizing exactly how your reasoning is limited, which is rather hard.
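The self-reference problem above can be sketched concretely. This is an illustrative toy (the names `agent` and `omega_predict` are mine, not from any standard formalization): if the agent’s strategy is to consult the prediction and do the opposite, then a predictor that works by simulating the agent ends up simulating itself, and the prediction never terminates.

```python
def agent(predict):
    """A 'contrarian' strategy: figure out what Omega would have
    predicted, then do the opposite of it."""
    return "one-box" if predict() == "two-box" else "two-box"

def omega_predict():
    """A naive Omega that predicts by simulating the agent. But the
    agent consults this very prediction, so the simulation recurses
    without end (Python cuts it off with a RecursionError)."""
    return agent(omega_predict)

try:
    omega_predict()
    prediction_failed = False
except RecursionError:
    # Omega cannot decide: the prediction is self-referential.
    prediction_failed = True
```

This is essentially the diagonalization move behind the halting problem: against this kind of agent, no simulating predictor of this form can succeed, which is why the problem statement has to assume Omega’s success rather than derive it.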
In the first case, Omega does not offer you the deal, and you receive $0, proving that it is possible to do worse than a two-boxer.
In the second case, you are placed into a superposition of taking one box and both boxes, receiving the appropriate reward in each.
In the third case, you are counted as ‘selecting’ both boxes, since it’s hard to convince Omega that grabbing a box doesn’t count as selecting it.