I suppose not fighting the hypothesis here must include ignoring the possibility that there are things in the universe that can't be predicted by a mind localized in space and time. So an Omega can exist: it is possible to know everything about you, including the position and momentum of every particle that makes up your body and your brain, and that information is enough to predict all future positions and momenta of every one of those particles.
I don't think such an Omega can exist in our universe either, but if it could exist in some universe, and I could exist in that universe too, would I one-box or two-box? All I can really say is that I hope I would one-box; it is not entirely up to the me-now in this universe. Whatever the hypothetical then-and-there me would do, I am confident that if he one-boxed he'd get the million in that universe, and if he didn't he wouldn't. Unless we assume that whatever we think we would do in other universes is, by hypothesis, correct once we state it, in which case we wouldn't want to fight that hypothesis either.
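To make that "if he one-boxed he'd get the million" concrete, here is a minimal sketch assuming the standard Newcomb stakes ($1,000,000 in the opaque box, $1,000 in the transparent one) and taking the hypothesis of a perfectly accurate Omega at face value:

```python
# Minimal sketch of Newcomb's problem under the hypothesis that Omega's
# prediction is perfect. Standard stakes assumed: $1,000,000 in the opaque
# box if Omega predicts one-boxing, $1,000 always in the transparent box.

def payoff(choice: str) -> int:
    """Return the payout for 'one-box' or 'two-box', given a perfect Omega."""
    prediction = choice  # a perfect predictor's prediction matches the choice
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque if choice == "one-box" else opaque + transparent

for choice in ("one-box", "two-box"):
    print(f"{choice}: ${payoff(choice):,}")
# one-box: $1,000,000
# two-box: $1,000
```

Once the hypothesis fixes `prediction = choice`, one-boxing dominates, which is all the confidence expressed above amounts to.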
None of this seems to me to bear on the question of how much effort should be devoted to making sure an AI in THIS universe would one-box, which I thought was the original reason to bring up Newcomb's problem here. To answer THAT question, you WOULD have to concern yourself with whether this is a universe in which an honest Omega could exist.
But for the pure problem, where we don't get to give our hypotheses the sniff test, you know what you must do.