In Newcomb’s scenario, an agent who believes they have a 99.9% chance of fooling Omega should two-box. They’re wrong, and will only get $1,000 instead of $1,000,000, but that’s the cost of having wildly inaccurate beliefs about the world they’re in, not a criticism of any particular decision theory.
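The expected-value arithmetic behind that claim can be sketched as follows. This is a minimal illustration, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff Omega predicted one-boxing); the `p_fool` parameter, the agent's credence that Omega mispredicts them, is a name introduced here for illustration.

```python
# Standard Newcomb payoffs (assumed): transparent box always holds $1,000;
# the opaque box holds $1,000,000 only if Omega predicted one-boxing.

def two_box_ev(p_fool):
    """Expected value of two-boxing, given credence p_fool that Omega
    mispredicts (i.e. fills the opaque box despite the two-boxing)."""
    return p_fool * (1_000_000 + 1_000) + (1 - p_fool) * 1_000

def one_box_ev(p_fool):
    """Expected value of one-boxing under the same credence: the opaque
    box is full only when Omega predicted correctly."""
    return (1 - p_fool) * 1_000_000

# Under the agent's mistaken 99.9% credence, two-boxing looks dominant:
print(two_box_ev(0.999), one_box_ev(0.999))

# In the actual world Omega predicts reliably (p_fool ~ 0), so the
# two-boxer walks away with $1,000 while the one-boxer gets $1,000,000:
print(two_box_ev(0.0), one_box_ev(0.0))
```

The point is that the two-boxing choice is the correct output of expected-value reasoning given the agent's beliefs; the bad outcome traces back to the false `p_fool` input, not the decision procedure.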
Setting up a scenario in which the agent has true beliefs about the world isolates the effect of the decision theory for analysis, without mixing in a bunch of extraneous factors. Likewise for the fairness assumption that says that the payoff distribution is correlated only with the agents’ strategies and not the process by which they arrive at those strategies.
Violating those assumptions does allow a broader range of scenarios, but doesn’t appear to help in the evaluation of decision theories. It’s already a difficult enough field of study without throwing in stuff like that.