F = Factors that feed into your decision process.
OP = Omega’s prediction.
YD = Your decision.
F --> OP
F --> YD
Your decision does not bootstrap itself out of nothing; it is a function of F. All causality here runs forwards in time. By the definition of Omega, OP and YD always match, and the causality chain is self-consistent for a single timeline. Most of the confusion I have seen around Omega or Newcomb seems to be confusion about at least one of these points.
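Here is a minimal Haskell sketch of that picture (the type F, the stand-in decision rule, and all the names are just my own toy encoding, not part of the problem statement). OP and YD match on every timeline simply because Omega computes the same function of the same F:

```haskell
-- Toy encoding of the F --> OP, F --> YD picture above.
type F = Int  -- stand-in for "all the factors feeding into your decision"

data Decision = Pay | Refuse deriving (Eq, Show)

-- Your decision is some (arbitrary, here) function of F.
yourDecision :: F -> Decision
yourDecision f = if even f then Pay else Refuse

-- Omega's prediction is the same function of the same F,
-- which is why OP and YD always agree on a single timeline.
omegaPrediction :: F -> Decision
omegaPrediction = yourDecision

-- For every f, omegaPrediction f == yourDecision f, trivially.
consistent :: F -> Bool
consistent f = omegaPrediction f == yourDecision f
```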
Yeah, I agree with that.
The catch is that Omega isn’t going to show up if it predicts you aren’t going to pay. If it showed up, then it must have predicted you are going to pay.
Oops: as soon as Omega tells you his prediction, the above has to change, because now there is a new element in F.
I think this is the same self-referential problem Mr. Hen calls out in this comment.
I think I agree with Sly. If Omega spilling the beans influences your decision, then it is part of F, and therefore Omega must model that. If Omega fails to predict that revealing his prediction will cause you to act contrarily, then he fails at being Omega.
I can’t tell whether this makes Omega logically impossible or not. Anyone?
This doesn’t make Omega logically impossible unless we make him tell his prediction. (In order to be truthful, Omega would only tell a prediction that is stable upon telling, and there may not be one.)
I don’t think it makes Omega logically impossible in all situations; I think it depends upon whether F --> YD (or a function based on it that can be recursively applied) has a fixed point or not.
I’ll try to hash it out tomorrow in Haskell, but now it is late. See also the fixed point combinator if you want to play along at home.
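In the meantime, here is a rough sketch of the fixed-point question under my own toy encoding (the agent functions and names below are made up for illustration). Once the announced prediction is itself part of F, Omega needs an announcement p with decide f p == p; for a contrarian agent no such p exists, which is the "there may not be one" case above:

```haskell
import Data.List (find)

data Decision = Pay | Refuse deriving (Eq, Show, Enum, Bounded)

type F = Int  -- stand-in for the other factors feeding into the decision

-- An agent whose choice depends on both F and the announced prediction.
-- This one is contrarian: it always does the opposite of what it is told.
contrarian :: F -> Decision -> Decision
contrarian _ Pay    = Refuse
contrarian _ Refuse = Pay

-- An agent that simply does whatever it is told it will do.
compliant :: F -> Decision -> Decision
compliant _ p = p

-- Search for an announcement that is stable upon telling,
-- i.e. a fixed point of (\p -> decide f p).
stablePrediction :: (F -> Decision -> Decision) -> F -> Maybe Decision
stablePrediction decide f = find (\p -> decide f p == p) [minBound .. maxBound]

-- stablePrediction contrarian 0  ==>  Nothing   (Omega stays silent)
-- stablePrediction compliant 0   ==>  Just Pay  (either answer would be stable)
```

I enumerate the two possible announcements directly rather than using Data.Function.fix, since on a flat two-value domain fix simply diverges when there is no productive fixed point; the existence question it answers is the same either way.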
I would assume that Omega telling you his prediction was already factored into F, and therefore into Omega’s prediction.