In real-world games, we spend most of our time trying to make action-conditional predictions: “If I play Foo, then my opponent will play Bar.” There is no attempt to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb’s problem matches that.
(For example, transparent boxes: Omega predicts “if I fill both boxes, then the player will ___” and fills the boxes based on that prediction. Or a few other variations on that.)
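To make the action-conditional reading concrete, here is a minimal sketch in Python, assuming the transparent-box variant where the player sees whether the big box is full before choosing. The policy names and payoff amounts are my own illustrative assumptions, not part of any canonical statement of the problem.

```python
# Minimal sketch of the action-conditional (transparent-box) formulation.
# Omega never predicts the player unconditionally; it asks the conditional
# question "if I fill both boxes, what will the player do?" and acts on that.

def omega_fills_big_box(player_policy) -> bool:
    """Fill the big box only if the player is predicted to one-box
    *conditional on* seeing the big box visibly full."""
    predicted_choice = player_policy(big_box_visibly_full=True)
    return predicted_choice == "one-box"

def payoff(player_policy) -> int:
    big_box_full = omega_fills_big_box(player_policy)
    choice = player_policy(big_box_visibly_full=big_box_full)
    big = 1_000_000 if big_box_full else 0
    small = 1_000
    return big + (small if choice == "two-box" else 0)

# Two simple player policies: each maps what the player sees to an action.
always_one_box = lambda big_box_visibly_full: "one-box"
always_two_box = lambda big_box_visibly_full: "two-box"

print(payoff(always_one_box))   # 1000000
print(payoff(always_two_box))   # 1000
```

Note that nothing here requires Omega to predict its own output or the player’s unconditional behavior; the prediction is a conditional about how the player responds to one of Omega’s possible actions, which is the same shape as ordinary game-playing prediction.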
In many (probably most?) games we consider the opponent’s strategy, not simply their next move. Making moves in an attempt to confuse your opponent’s estimation of your own strategy is a common tactic in many games.
Your “modified Newcomb” doesn’t allow the chooser to have a strategy: they aren’t allowed to say “if I predict Omega did X, I’ll do Y.” It’s a weird sort of game where my opponent takes my strategy into account, but something keeps me from considering my opponent’s.
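A hedged way to show the asymmetry I mean, under my reading of the “modified Newcomb”: Omega’s move is a function of the chooser’s whole strategy, while the chooser must commit to a strategy that takes no information about Omega as input. The names and types below are illustrative, just to make that shape visible.

```python
# Sketch of the asymmetry: Omega conditions on the chooser's strategy,
# but the chooser's strategy is not allowed to condition on Omega.
from typing import Callable, Literal

Action = Literal["one-box", "two-box"]
ChooserStrategy = Callable[[], Action]             # takes no input about Omega
OmegaStrategy = Callable[[ChooserStrategy], bool]  # sees the chooser's whole strategy

def omega(chooser: ChooserStrategy) -> bool:
    # Omega fills the big box iff the chooser's strategy one-boxes.
    return chooser() == "one-box"

one_boxer: ChooserStrategy = lambda: "one-box"
print(omega(one_boxer))  # True: Omega fills the big box for a one-boxing strategy

# What the chooser would *like* to submit, but can't in this formulation:
# a strategy of type Callable[[bool], Action], e.g.
#   lambda big_box_full: "two-box" if big_box_full else "one-box"
# The game only accepts the zero-argument kind above.
```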