Omega in Newcomb’s problem is doing something that plausibly is very general
This seems to be the claim under dispute, and the question of fairness should be distinguished from the claim that Omega is doing something realistic or unrealistic. I think we agree that Newcomb-like situations are practically possible. But it may be that my unfair game is practically possible too, and that in principle no decision theory can come out maximizing utility in every practically possible game.
One response might be to say Newcomb’s problem is more unfair than the problem of simply choosing between two boxes containing different amounts of money, because Newcomb’s distribution of utility makes mention of the decision. Newcomb’s is unfair because it goes meta on the decider. My TDT punishing game is much more unfair than Newcomb’s because it goes one ‘meta’ level up from there, making mention of the decision theories.
You could argue that even if no decision theory can maximise in every arbitrarily unfair game, there are degrees of unfairness related to the degree to which the problem ‘goes meta’. We should just prefer the decision theory that can maximise at the highest level of unfairness. This could probably be supported by the observation that while all these unfair games are practically possible, the more unfair a game is the less likely we are to encounter it outside of a philosophy paper. You could probably come up with a formalization of unfairness, though it might be tricky to argue that it’s relevantly exhaustive and linear.
EDIT: (Just a note, you could argue all this without actually granting that my unfair game is practically possible, or that Newcomb’s problem is unfair, since the two-boxer will provide those premises.)
A theory that is incapable of dealing with agents that make decisions based on the projected reactions of other players, is worthless in the real world.
A theory that is incapable of dealing with agents that make decisions based on the projected reactions of other players, is worthless in the real world.
However, an agent that makes decisions based on perfectly predicting the reactions of other players does not exist in the real world.
Newcomb does not require a perfect predictor.
I know that with the numbers in the canonical case, the required predictor accuracy works out to only .5005, which is within noise of random.
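The .5005 figure can be checked directly. Assuming the standard canonical payoffs (not restated in the comment): the opaque box holds $1,000,000 iff one-boxing was predicted, the transparent box always holds $1,000, and the predictor is correct with probability p. A minimal sketch of the expected-value comparison:

```python
# Break-even predictor accuracy for one-boxing in canonical Newcomb payoffs.
# Assumed payoffs: BIG = $1,000,000 (opaque box if one-boxing predicted),
# SMALL = $1,000 (transparent box, always present).
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # Correct prediction (prob p) means the opaque box is full.
    return p * BIG

def ev_two_box(p):
    # Correct prediction (prob p): opaque box empty, keep only SMALL.
    # Misprediction (prob 1 - p): opaque box full, keep both.
    return p * SMALL + (1 - p) * (BIG + SMALL)

# One-boxing wins when p * BIG > p * SMALL + (1 - p) * (BIG + SMALL),
# which solves to p > (BIG + SMALL) / (2 * BIG).
break_even = (BIG + SMALL) / (2 * BIG)
print(break_even)  # 0.5005
```

So any predictor noticeably better than a coin flip already makes one-boxing the higher-expected-value choice under these payoffs.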