You’ve now destroyed the usefulness of Newcomb as a potentially interesting analogy to the real world. In real-world games, my opponent is trying to infer my strategy and I’m trying to infer theirs.
If Newcomb is only about a weird world where Omega can try to predict the player’s actions, but the player is not allowed to predict Omega’s, then it’s sort of a silly problem. It’s lost most of its generality because you’ve explicitly disallowed the majority of strategies.
If you allow the player to pursue his own strategy, then it’s still a silly problem, because the question ends up being inconsistent (if Omega plays Omega, nothing can happen).
In real-world games, we spend most of our time trying to make action-conditional predictions: “If I play Foo, then my opponent will play Bar.” There’s no attempt to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb’s matches that.
(For example, transparent boxes: Omega predicts “if I fill both boxes, then the player will ___” and fills the boxes based on that prediction. Or a few other variations on that.)
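As a rough illustration of what that action-conditional formulation looks like, here’s a toy Python sketch (my own construction, not a canonical statement of the problem; the payoff amounts and function names are assumptions): Omega’s prediction is conditional on its own action, and the player’s “strategy” is just a function from what they observe to a choice.

```python
# A minimal sketch of the transparent-box variant: the player's strategy is a
# function from what they see to a choice, and Omega's prediction is
# action-conditional -- "if I fill both boxes, then the player will ___".

def player_policy(big_box_full: bool) -> str:
    """One possible player strategy: decide based on what the big box shows."""
    return "one-box" if big_box_full else "two-box"

def omega_fills_big_box(policy) -> bool:
    """Omega's action-conditional prediction: 'if I fill the big box,
    will this policy one-box?' Fill it only if the answer is yes."""
    return policy(big_box_full=True) == "one-box"

def payoff(policy) -> int:
    big_box_full = omega_fills_big_box(policy)
    choice = policy(big_box_full)
    small = 1_000                      # small box is always filled
    big = 1_000_000 if big_box_full else 0
    return big + (small if choice == "two-box" else 0)

print(payoff(player_policy))                    # 1,000,000
print(payoff(lambda big_box_full: "two-box"))   # 1,000
```

Nothing circular happens here: Omega only ever asks a conditional question about the player’s policy, and the policy only ever conditions on what the player can see.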
In many (probably most?) games we consider the opponent’s strategy, not simply their next move. Making moves in an attempt to confuse your opponent’s estimation of your own strategy is a common tactic in many games.
Your “modified Newcomb” doesn’t allow the chooser to have a strategy: they aren’t allowed to say “if I predict Omega did X, I’ll do Y.” It’s a weird sort of game where my opponent takes my strategy into account, but something keeps me from considering my opponent’s.
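For concreteness, the kind of chooser-side strategy being disallowed might look like this (purely illustrative; the names are mine):

```python
# A hypothetical chooser-side strategy of the sort the "modified Newcomb"
# forbids: the player conditions on their own prediction of Omega's move
# ("if I predict Omega did X, I'll do Y").

def chooser_strategy(predicted_big_box_full: bool) -> str:
    # If I predict Omega filled the big box, take only it; otherwise take both.
    return "one-box" if predicted_big_box_full else "two-box"
```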