As I understand the responses, most people think the main point of Newcomb’s problem is that you rationally should cooperate given the 1,000,000 / 1,000 payoff matrix.
I am no expert on the history of Newcomb’s problem, but I think it was specifically constructed as a counter-example to the common-sensical decision-theoretic principle that one should treat past events as independent of the decisions being made now. That’s also how it is most commonly interpreted on LW, although the concept of a near-omniscient predictor “Omega” is employed in a wide range of different thought experiments here, and it’s possible that your objection is relevant to some of them.
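For concreteness, here is a rough sketch of why that principle is what’s at stake, assuming the standard payoffs and a predictor that is correct with probability p (the accuracy parameter p is my addition, not part of the original problem statement):

\[
\mathbb{E}[\text{one-box}] = p \cdot 1{,}000{,}000, \qquad
\mathbb{E}[\text{two-box}] = p \cdot 1{,}000 + (1-p) \cdot 1{,}001{,}000 .
\]

One-boxing has the higher expected value whenever \(p > 1001/2000 \approx 0.5005\), i.e. as soon as the predictor is even slightly better than chance, even though the causal reasoning “the box contents are already fixed, so taking both boxes always gains me 1,000” still seems to apply.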
I am not sure whether it makes sense to call one-boxing cooperation. Newcomb’s problem isn’t the Prisoner’s Dilemma, at least not in its original form.