Suppose that I beat up all rational people so that they get less utility. This would not make rationality irrational; it would just mean that the world is bad for the rational. The question you’ve described may be a fine one, but it’s not the one philosophers are arguing about in Newcomb’s problem. If Eliezer claims to have revolutionized decision theory, and yet doesn’t know enough about decision theory to see that he is answering a different question from the decision theorists, that is an utter embarrassment that significantly undermines his credibility.
And on that question, Newcomb’s problem becomes trivial. Of course, if Newcomb’s problem comes up a lot, you should design agents that one-box, since they get more utility on average. But the question is what’s rational for the agent to do, not what’s rational for it to commit to or to become, or what’s rational for its designers to do.
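To put a rough number on the “more utility on average” point, here is a back-of-the-envelope sketch, assuming the conventional Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent box) and a predictor that is right with probability p:

E[one-box] = p · 1,000,000
E[two-box] = p · 1,000 + (1 − p) · 1,001,000

One-boxers come out ahead on average whenever p > 1001/2000 ≈ 0.5005, i.e. for any predictor even slightly better than chance.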