Imagine a variant where both boxes are transparent and you can see what’s inside, but the contents of the boxes were still determined by Omega’s prediction of your future decision. (I think this formulation is due to Gary Drescher.) I’m a one-boxer in that variant too, how about you?
Of course you are; Omega says so! Short of being infinitely confident in Omega’s abilities (and my understanding of them), I’d reach for both boxes. Are you predicting I will see the million dissolve into thin air?
Are you trying to pre-commit in order to encourage potential Omegas to hand over the money? Is there more extensive discussion of this variant?
No. Based on your comment, I’m predicting you won’t see the million in the first place.
Isn’t the problem definition that I see both boxes full?
No. The problem definition is: if Omega predicts that, on seeing both boxes full, you would take just Box B, then you will see both boxes full; otherwise you will see only Box A full.
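For concreteness, here is a minimal sketch of that rule (not from the thread itself), assuming the usual stakes of $1,000 in Box A and $1,000,000 in Box B and a perfectly accurate Omega; under those assumptions, only the one-boxing disposition ever sees, and keeps, the million.

    # Minimal sketch of the transparent-Newcomb payoff rule described above.
    # Assumed amounts (not from the thread): Box A always holds $1,000;
    # Box B holds $1,000,000 when filled. Omega is assumed perfectly accurate.
    def boxes_shown(one_boxes_when_both_full: bool) -> tuple[int, int]:
        # If Omega predicts you would take just Box B on seeing both boxes full,
        # both boxes are filled; otherwise only Box A is filled.
        box_a = 1_000
        box_b = 1_000_000 if one_boxes_when_both_full else 0
        return box_a, box_b

    def payoff(one_boxes_when_both_full: bool) -> int:
        # Payoff for an agent whose disposition Omega predicts correctly.
        box_a, box_b = boxes_shown(one_boxes_when_both_full)
        if box_b > 0 and one_boxes_when_both_full:
            return box_b              # sees both boxes full, takes only Box B
        return box_a + box_b          # otherwise takes whatever is visible

    print(payoff(True))   # 1000000: the one-boxing disposition
    print(payoff(False))  # 1000: the two-boxing disposition never sees the million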
Yes, there is a more extensive discussion in Good and Real. Basically, you act for the sake of what would be the case if you had acted that way.
In this case, there is actually a plausible causal reason to one-box: you could be the instance that Omega is simulating in order to make its prediction. But even if not, there are all sorts of cases where we act for the sake of what would be the case if we did. Good and Real discusses this extensively.