Newcomb’s problem doesn’t rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.
This was probably just me (how I read Newcomb’s problem / what I think is interesting about it). As I understand the responses, most people think the main point of Newcomb’s problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix. I emphasized in my post that I take that as a given. I thought mostly about the question of whether you can successfully two-box at all, so that was the “point” of Newcomb’s problem for me. To formalize this, say I replaced the payoff matrix with 1000/1000, or even with device A / device B, where device A corresponds to $1000 and device B corresponds to $1000, but device A + device B together correspond to $100000 (e.g. they have a combined function).
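For the standard 1000000 / 1000 payoffs that I take as given, here is a minimal sketch of the expected-value arithmetic (my own illustration, not anything from the thread; the predictor-accuracy parameter p is an assumption, since the problem itself doesn’t specify one):

```python
# Sketch: expected payoffs of one-boxing vs. two-boxing, assuming Omega
# predicts correctly with probability p (an illustrative assumption).

def expected_payoffs(p, big=1_000_000, small=1_000):
    """Return (one_box, two_box) expected values for predictor accuracy p."""
    one_box = p * big                    # box B is full iff one-boxing was predicted
    two_box = p * small + (1 - p) * (big + small)
    return one_box, two_box

# With these payoffs, one-boxing has the higher expectation for any p > ~0.5005.
for p in (0.5, 0.9, 0.99):
    ob, tb = expected_payoffs(p)
    print(f"p={p}: one-box {ob:,.0f}, two-box {tb:,.0f}")
```

The point of my modified payoffs above is precisely that this expected-value comparison is not where my interest lies; the question is whether successful two-boxing is possible at all.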
I still don’t understand why you would be so surprised if you saw Omega doing the trick a hundred times, assuming no stage magic. Do you find it so improbable that out of the hundred people Omega has questioned, not a single one had a quantum coin on him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don’t carry quantum widgets.
Well, I thought about people actively resisting prediction, so some of them flipping a coin or at least using a mental process with several recursion levels (I think that Omega thinks that I think...). I am pretty, though not absolutely, sure that these processes are partly quantum-random, or at least chaotic enough to be computationally intractable for anything within our universe. Still, Omega would probably do much better than random (except if everyone flips a coin; I am not sure whether that is predictable with computational power levels realizable in our universe).
As I understand the responses, most people think the main point of Newcomb’s problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix.
I am no expert on the history of Newcomb’s problem, but I think it was specifically constructed as a counter-example to the common-sense decision-theoretic principle that one should treat past events as independent of the decisions being made now. That’s also how it is most commonly interpreted on LW, although the concept of a near-omniscient predictor “Omega” is employed in a wide range of different thought experiments here, and it’s possible that your objection is relevant to some of them.
I am not sure whether it makes sense to call one-boxing cooperation. Newcomb’s problem isn’t the Prisoner’s Dilemma, at least not in its original form.