Well, for instance, he cannot make 1+1=3. And if one defines rationality as actually winning, then he cannot act in such a way that rational people lose. This is perfectly obvious; and, in case you have misunderstood what I wrote (as it looks like you have), that is the only thing I said that Omega cannot do.
In the discussion of strategy S, my claim was not about what Omega can do but about what you (a person attempting to implement such a strategy) can consistently include in your model of the universe. If you are an S-rational agent, then Omega may decide to screw you over, in which case you lose; that’s OK (as far as the notion of rationality goes; it’s too bad for you) because S doesn’t purport to guarantee that you don’t lose.
What S does purport to do is to arrange that, in so far as the universe obeys your (incomplete, probabilistic, …) model of it, you win on average. Omega’s malfeasance is only a problem for this if it’s included in your model. Which it can’t be. Hence:
> what your example shows [...] is that you can’t consistently expect Omega to act in a way that falsifies your beliefs and/or invalidates your strategies for acting.
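To make the point concrete, here is an illustrative sketch (my own, not from the original discussion) of a strategy like S as expected-utility maximization over an explicit probabilistic model. The names (`best_action`, the world labels, the payoff numbers) are all hypothetical; the point is structural: such a strategy only "wins on average" with respect to worlds its model assigns probability to, so an adversary acting outside the model is simply invisible to the optimization.

```python
def best_action(actions, model, payoff):
    """Pick the action maximizing expected payoff under `model`.

    model:  dict mapping world -> probability (assumed to sum to 1)
    payoff: function (action, world) -> real-valued utility
    """
    def expected_payoff(action):
        return sum(p * payoff(action, world) for world, p in model.items())
    return max(actions, key=expected_payoff)

# Toy example with made-up numbers: the agent's model contains only two
# possible worlds. A world in which Omega falsifies the model itself is,
# by construction, not among the keys, so it cannot affect the choice.
model = {"omega_friendly": 0.9, "omega_hostile": 0.1}
payoffs = {
    ("one_box", "omega_friendly"): 1_000_000,
    ("one_box", "omega_hostile"): 0,
    ("two_box", "omega_friendly"): 1_000,
    ("two_box", "omega_hostile"): 1_000,
}
choice = best_action(["one_box", "two_box"], model,
                     lambda a, w: payoffs[(a, w)])
```

Here `choice` comes out as `"one_box"` (expected payoff 900,000 vs. 1,000), but only because the model says so; nothing in the optimization guards against worlds the model omits, which is exactly the sense in which Omega's malfeasance "can't be included."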
(Actually, I think that’s not quite right. You could probably consistently expect that, provided your expectations about how he’s going to do it were vague enough.)
I did not claim, nor do I believe, that a regular person can compute a perfectly rational strategy in the sense I described. Nor do I believe that a regular person can play chess without making any mistakes. Nonetheless, there is such a thing as playing chess well; and there is such a thing as being (imperfectly, but better than one might be) rational. Even with a definition of the sort Eliezer likes.