I don’t see how this is relevant, but yes, in principle it’s impossible to predict the universe perfectly, because the universe plus your brain is bigger than your brain alone. Although, if you live in a bubble universe that is bigger than the rest of the universe, and whose interaction with the rest of the universe is limited precisely to your chosen manipulation of the connecting bridge (basically, if you are AIXI), then you may be able to perfectly predict the universe conditional on your actions.
This has pretty much no impact on the actual Newcomb’s problem, though, since we can define such problems away by having Omega do the obvious thing to prevent those shenanigans (“trolls get no money”). For the purposes of the thought experiment, action-conditional predictions are fine.
IOW, this is not a problem with Newcomb’s. By the way, this has been discussed previously.
You’ve now destroyed the usefulness of Newcomb as a potentially interesting analogy to the real world. In real-world games, my opponent is trying to infer my strategy and I’m trying to infer theirs.
If Newcomb is only about a weird world where Omega can try to predict the player’s actions, but the player is not allowed to predict Omega’s, then it’s sort of a silly problem. It’s lost most of its generality, because you’ve explicitly disallowed the majority of strategies.
If you allow the player to pursue his own strategy, then it’s still a silly problem, because the question ends up being inconsistent (if Omega plays against Omega, nothing can happen).
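To make that circularity concrete, here is a minimal sketch, assuming two hypothetical functions (`omega_predict` and `player_choose`, both my own illustrative names) in which each side tries to predict the other unconditionally by simulating it; neither simulation ever bottoms out:

```python
# Toy illustration of the "Omega plays Omega" regress: if each side
# predicts the other unconditionally by simulating it, the simulation
# never terminates. All names here are illustrative assumptions.

def omega_predict():
    # Omega predicts the player by simulating the player's reasoning.
    return "big box empty" if player_choose() == "two-box" else "big box full"

def player_choose():
    # The player predicts Omega by simulating Omega's reasoning.
    return "two-box" if omega_predict() == "big box full" else "one-box"

try:
    player_choose()
except RecursionError:
    print("Mutual unconditional prediction never bottoms out.")
```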
In real-world games, we spend most of our time trying to make action-conditional predictions: “If I play Foo, then my opponent will play Bar.” There’s no attempt to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb’s matches that.
(For example, transparent boxes: Omega predicts “if I fill both boxes, then player will ___” and fills the boxes based on that prediction. Or a few other variations on that.)
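For concreteness, a toy sketch of that transparent-box setup, where Omega’s prediction is explicitly conditional on its own fill action (the function names, policies, and dollar amounts are illustrative assumptions, not part of the original problem statement):

```python
# Toy model of the transparent-box variant: Omega makes an
# action-conditional prediction ("if I fill the big box, the player
# will ___") and fills it based on that prediction.

def omega_fill(player_policy):
    # Omega's action-conditional prediction: what would the player do
    # upon seeing the big box filled?
    predicted = player_policy(big_box_filled=True)
    return predicted == "one-box"   # fill only if the player would one-box

def play(player_policy):
    big_box_filled = omega_fill(player_policy)
    choice = player_policy(big_box_filled=big_box_filled)
    small = 1_000                             # transparent small box: $1k
    big = 1_000_000 if big_box_filled else 0
    return big + small if choice == "two-box" else big

one_boxer = lambda big_box_filled: "one-box"
two_boxer = lambda big_box_filled: "two-box"

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```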
In many (probably most?) games we consider the opponent’s strategy, not simply their next move. Making moves in an attempt to confuse your opponent’s estimation of your own strategy is a common tactic.
Your “modified Newcomb” doesn’t allow the chooser to have a strategy: they aren’t allowed to say “if I predict Omega did X, I’ll do Y.” It’s a weird sort of game where my opponent takes my strategy into account, but something keeps me from considering my opponent’s.