I don’t see why you think this would apply to Newcomb. Omega is not an “other person”; it has no motivation, no payoff matrix.
Whatever its reasons, Omega wants to set up the boxes so that if you one-box, both boxes contain money, and if you two-box, only one box does. It can be said to have preferences insofar as it uses its predictive powers to try to bring that about.
I can’t play at a higher level than Omega’s model of me. It’s like playing against a stronger chess player: all I can predict is that they will win. At any step where I say “its model will stop here, so I’ll do this instead,” the model won’t stop there, and Omega will turn out to be playing at a higher level than me.
Really? If your decision theory allows you to choose either option, then how could Omega possibly predict your decision?
Because on some level my choice is going to be nonrandom (I am made of physical particles following physical rules), and if Omega is an omniscient perfect reasoner, it can determine my choice in advance even if I can’t.
But as it happens, I would choose the money, because choosing the money is a dominant strategy for anything short of absolute certainty in the other party’s predictive abilities, and I’m not inclined to start behaving differently as soon as I theoretically have absolute certainty.
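For concreteness, here is a minimal sketch of that trade-off, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if Omega predicts one-boxing, $1,000 always in the transparent box) and a predictor with some accuracy p; these dollar amounts and the accuracy parameter are my assumptions, not anything stated above. The dominance argument says two-boxing gains $1,000 whatever the boxes already contain, while the expected-value comparison tips toward one-boxing once p exceeds about 0.5005.

```python
# Sketch of the Newcomb payoff comparison, assuming the standard amounts:
# $1,000,000 in the opaque box iff Omega predicts one-boxing, and $1,000
# always in the transparent box. `accuracy` is the predictor's probability
# of correctly predicting your actual choice.

BIG, SMALL = 1_000_000, 1_000

def expected_value(choice: str, accuracy: float) -> float:
    """Expected payoff of 'one-box' or 'two-box' against a fallible predictor."""
    if choice == "one-box":
        # The opaque box is full only if Omega correctly predicted one-boxing.
        return accuracy * BIG
    # The opaque box is full only if Omega *mistakenly* predicted one-boxing.
    return (1 - accuracy) * BIG + SMALL

for accuracy in (0.5, 0.5005, 0.9, 1.0):
    one = expected_value("one-box", accuracy)
    two = expected_value("two-box", accuracy)
    print(f"accuracy={accuracy:.4f}  one-box EV=${one:,.0f}  two-box EV=${two:,.0f}")

# The causal-dominance argument instead conditions on the boxes' contents:
# whatever is already in the opaque box, two-boxing yields exactly SMALL more.
```

At accuracy 0.5005 the two expected values are equal; above that, one-boxing wins in expectation even though two-boxing still dominates holding the box contents fixed, which is exactly the tension the comment above is pointing at.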