Okay, those with a two-boxing agent type don’t win, but the two-boxer isn’t talking about agent types. They’re talking about decisions.
The problem doesn’t care whether you are the type of agent who talks about agent types or the type of agent who talks about decisions. The problem only cares about which actions you choose.
The problem does care about what kind of agent you are, because that’s what determined Omega’s prediction. It’s just that kinds of agents are defined by what you (would) do in certain situations.
Right. If you can be a one-boxer without one-boxing, that’s obviously what you do. Problem is, Omega is a superintelligence and you aren’t.
I don’t see how being a superintelligence would help. Even a superintelligence can’t do logically impossible things: you can’t be a one-boxer without one-boxing, because one-boxing is what constitutes being a one-boxer.
Omega is just a superintelligence. Presumably, he can’t see the future and he’s not omniscient, so it’s hypothetically possible to trick him, to make him think you’ll one-box when in reality you’re going to two-box.
I’m not sure if I have the vocabulary yet to solve the problem of identity vs. action, and I study philosophy, not decision theory, so for me that’s a huge can of worms. (I’ve already had to prevent myself from connecting the attempted two-boxer distinction between ‘winning’ and ‘rational’ to Nietzsche’s idea of a Hinterwelt—but that’s totally something that could be done, by someone less averse to sounding pretentious.) But I think that, attempting to leave the can closed, the distinction I drew above between one-boxing and being a one-boxer really refers to the distinction between actually one-boxing when it comes time to open the box and making Omega think you’ll one-box—which may or may not be identical to making Omega think you’re the sort of person who will one-box.
And the problem I raised above is that nobody’s managed to trick him yet, so by simple induction, it’s not reasonable to bet a million dollars on your being able to succeed where everyone else failed. So maybe the superintelligence thing doesn’t even enter into it...? (Would it make a difference if it were just a human game show that still displayed the same results? Would anyone one-box for Omega but two-box in the game show?)
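To put rough numbers on that last point, here is a minimal sketch of the expected-value comparison, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff Omega predicted one-boxing, $1,000 in the transparent box) and summarizing Omega's track record as a prediction accuracy p. The payoffs, the accuracy parameter, and the function name are illustrative assumptions, not anything stated in the thread.

```python
# Illustrative sketch, not part of the thread above. Assumptions: standard
# Newcomb payoffs (opaque box holds $1,000,000 iff Omega predicted one-boxing;
# the transparent box always holds $1,000), and Omega's track record is
# summarized as an accuracy p, i.e. the probability his prediction matches
# what you actually do.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars from the chosen action against a predictor of accuracy p."""
    if one_box:
        # Predicted correctly (prob p): opaque box holds $1,000,000.
        # Predicted incorrectly (prob 1 - p): opaque box is empty.
        return p * 1_000_000
    # Two-boxing:
    # Predicted correctly (prob p): opaque box is empty, you keep only $1,000.
    # Predicted incorrectly (prob 1 - p): you get $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p = {p:.2f}: one-box {expected_payoff(True, p):>12,.0f}, "
          f"two-box {expected_payoff(False, p):>12,.0f}")
```

On these assumptions the break-even accuracy is only about 0.5005; any track record much better than a coin flip, let alone "nobody has ever tricked him", makes the two-boxing bet look bad.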