I think this is the same self-referential problem Mr. Hen calls out in this comment.
I think I agree with Sly. If Omega spilling the beans influences your decision, then it is part of F, and therefore Omega must model that. If Omega fails to predict that revealing his prediction will cause you to act contrarily, then he fails at being Omega.
I can’t tell whether this makes Omega logically impossible or not. Anyone?
This doesn’t make Omega logically impossible unless we make him tell his prediction. (In order to be truthful, Omega would only tell a prediction that is stable upon telling, and there may not be one.)
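To make that "stable upon telling" condition concrete, here is a minimal sketch; the Choice type and the compliant/contrarian agents below are my own toy encoding, not anything from the post:

    -- Model the agent's decision as a function of Omega's announced
    -- prediction, then search for an announcement that is stable upon
    -- telling, i.e. a fixed point of that function.
    data Choice = OneBox | TwoBox deriving (Eq, Show)

    -- A compliant agent does whatever Omega announces;
    -- a contrarian agent does the opposite.
    compliant, contrarian :: Choice -> Choice
    compliant announced = announced
    contrarian OneBox   = TwoBox
    contrarian TwoBox   = OneBox

    -- The predictions Omega could truthfully announce are exactly those
    -- the agent will go on to confirm.
    stableAnnouncements :: (Choice -> Choice) -> [Choice]
    stableAnnouncements agent = [p | p <- [OneBox, TwoBox], agent p == p]

    main :: IO ()
    main = mapM_ (print . stableAnnouncements) [compliant, contrarian]
    -- prints [OneBox,TwoBox] for the compliant agent and [] for the contrarian

With the compliant agent every announcement is stable; with the contrarian agent the list is empty, which is exactly the "there may not be one" case.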
I don’t think it makes Omega logically impossible in all situations; I think it depends on whether F-->YD (or a function based on it that can be applied recursively) has a fixed point or not.
I’ll try to hash it out in Haskell tomorrow, but now it is late. See also the fixed point combinator if you want to play along at home.
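For anyone playing along at home, here is roughly how the fixed point combinator enters the picture, using fix from Data.Function; the Bool encoding and the function names are just my assumptions for illustration:

    import Data.Function (fix)

    -- True stands for "Omega announces one-boxing" in this toy encoding.
    alwaysOneBox :: Bool -> Bool
    alwaysOneBox _ = True        -- agent one-boxes whatever is announced

    contrarianMap :: Bool -> Bool
    contrarianMap = not          -- agent does the opposite of the announcement

    main :: IO ()
    main = print (fix alwaysOneBox)   -- True: a stable announcement exists
    -- contrarianMap has no fixed point, so fix contrarianMap never terminates.

fix alwaysOneBox returns True straight away, while fix contrarianMap diverges because the map has no fixed point; that divergence is the analogue of Omega having no stable prediction it can truthfully reveal.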