Omega is just a superintelligence. Presumably he can’t see the future and isn’t omniscient, so it’s hypothetically possible to trick him: to make him think you’ll one-box when in reality you’re going to two-box.
I’m not sure I have the vocabulary yet to solve the problem of identity vs. action, and I study philosophy, not decision theory, so for me that’s a huge can of worms. (I’ve already had to stop myself from connecting the two-boxer’s attempted distinction between ‘winning’ and ‘rational’ to Nietzsche’s idea of a Hinterwelt, but that’s something someone less averse to sounding pretentious could do.) Trying to keep the can closed, though: the distinction I drew above between one-boxing and being a one-boxer really comes down to the distinction between actually one-boxing when it comes time to open the box and making Omega think you’ll one-box, which may or may not be the same thing as making Omega think you’re the sort of person who will one-box.
And the problem I raised above is that nobody has managed to trick him yet, so by simple induction it’s not reasonable to bet a million dollars on your succeeding where everyone else has failed. So maybe the superintelligence thing doesn’t even enter into it...? (Would it make a difference if it were just a human game show that still showed the same results? Would anyone one-box for Omega but two-box on the game show?)
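For what it’s worth, the induction point can be made numerical. Here’s a minimal sketch in Python, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and treating the predictor’s track record as an estimate of his accuracy p; the numbers and the function are mine, not anything from the original setup:

```python
def expected_value(p: float) -> tuple[float, float]:
    """Expected payoff of one-boxing vs. two-boxing when the predictor
    guesses your choice correctly with probability p (standard payoffs)."""
    one_box = p * 1_000_000                     # opaque box is full iff he predicted one-boxing
    two_box = p * 1_000 + (1 - p) * 1_001_000   # you get both boxes, but the big one is full only if he was wrong
    return one_box, two_box

# Compare the two strategies across a range of predictor accuracies.
for p in (0.5, 0.5005, 0.9, 0.99, 1.0):
    ob, tb = expected_value(p)
    print(f"p={p:.4f}  one-box: ${ob:>12,.0f}  two-box: ${tb:>12,.0f}")
```

On those payoffs, one-boxing wins in expectation as soon as p exceeds 1,001/2,000 ≈ 0.5005, i.e. as soon as the predictor is even slightly better than a coin flip; a predictor nobody has ever tricked is presumably far past that, human game show or not.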