(1) Why would Joe intend to use a random process in his decision? I’d assume that he wants the million dollars much more than to prove Omega’s fallibility (and that only with a 50% chance).
(2) Even if Joe for whatever reason prefers proving Omega’s fallibility, you can stipulate that Omega offers the deal only to people without semitransparent mirrors at hand.
(3) How is this
First of all I want to point out that I would still one-box after seeing Omega predicting 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.
compatible with this
So I would be very very VERY surprised if I saw Omega pull this trick 100 times in a row and I could somehow rule out Stage Magic (which I could not).
(emphasis mine)?
Note about terminology: on LW, dissolving a question usually refers to explaining that the question is confused (there is no answer to it as stated), together with pointing out why such a question seems sensible at first sight. What you are doing is not dissolving the problem; it’s rather fighting the hypo.
ad 1: As I pointed out in my post twice, in this case he precommits to one-boxing and that’s it, since assuming atomic-resolution scanning and practically infinite processing power he cannot hide his intention to cheat if he wants to two-box.
ad 2: You can stipulate that; I did not. I suspect, as pointed out, that he could do the same with his own brain, but of course if so Omega would know and would still exclude him.
ad 3:
First of all I want to point out that I would still one-box after seeing Omega predicting 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.
This assumed that I could somehow rule out stage magic. I did not say that; my mistake.
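To spell out the arithmetic behind that “50 to 100 bits” figure, here is a minimal sketch, under the assumption that a mere guesser would get each of the n predictions right with probability 1/2 while a genuine predictor would get them all right:

\[
\underbrace{\frac{P(\text{predictor} \mid n\ \text{hits})}{P(\text{guesser} \mid n\ \text{hits})}}_{\text{posterior odds}}
= \frac{P(n\ \text{hits} \mid \text{predictor})}{P(n\ \text{hits} \mid \text{guesser})}
\cdot \underbrace{\frac{P(\text{predictor})}{P(\text{guesser})}}_{\text{prior odds}}
= \frac{1}{(1/2)^{n}} \cdot \text{prior odds}
= 2^{n} \cdot \text{prior odds}.
\]

So \(n = 100\) correct predictions contribute \(\log_2 2^{100} = 100\) bits and outweigh any prior odds against a real predictor shorter than about \(2^{100}\) to one (i.e. a prior probability down to roughly \(10^{-30}\)).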
On terminology: See my response to shiminux. Yes, there is probably an aspect of fighting the hypo, but I think not primarily, since I think it is rather interesting to establish that you can prevent being predicted in a Newcomb-like problem.
OK, I understand now that your point was that one can in principle avoid being predicted. But putting it as an argument proving the irrelevance or incoherence of Newcomb’s problem (I’m not entirely sure that I understand correctly what you meant by “dissolve”, though) is very confusing and prone to misinterpretation. Newcomb’s problem doesn’t rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.
I still don’t understand why you would be so surprised if you saw Omega do the trick a hundred times, assuming no stage magic. Do you find it so improbable that out of the hundred people Omega has questioned, not a single one had a quantum coin on him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don’t carry quantum widgets.
Newcomb’s problem doesn’t rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.
This was probably just me (how I read Newcomb’s problem / what I think is interesting about it). As I understand the responses, most people think the main point of Newcomb’s problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix. I emphasized in my post that I take that as a given. I thought most about the question of whether you can successfully two-box at all, so this was the “point” of Newcomb’s problem for me. To formalize this, say I replaced the payoff matrix by 1000 / 1000, or even device A / device B, where device A corresponds to $1000, device B corresponds to $1000, but device A + device B together correspond to $100,000 (e.g. they have a combined function).
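As a rough illustration of the contrast, here is a minimal sketch of the two payoff structures (my reading of the device A / device B variant; the exact filling rule and dollar values are assumptions, not something stated above):

```python
def standard_newcomb(choice: str, prediction: str) -> int:
    """Classic payoffs: the opaque box holds $1,000,000 iff Omega predicted
    one-boxing; the transparent box always holds $1,000."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if choice == "one-box" else opaque + 1_000

def device_variant(choice: str, prediction: str) -> int:
    """Device variant (assumed reading): either device alone is worth $1,000,
    but both together are worth $100,000, so the big payoff requires two-boxing
    while being mispredicted as a one-boxer, i.e. successfully resisting prediction."""
    box_b_filled = prediction == "one-box"
    if choice == "one-box":
        return 1_000 if box_b_filled else 0
    return 100_000 if box_b_filled else 1_000

for payoff in (standard_newcomb, device_variant):
    for choice in ("one-box", "two-box"):
        for prediction in ("one-box", "two-box"):
            print(f"{payoff.__name__:16s} {choice:7s} vs predicted {prediction:7s}: "
                  f"${payoff(choice, prediction):,}")
```

In the variant, one-boxing never pays more than $1000, so the only way to win big is to beat the predictor; that is why the question of whether prediction can be resisted at all becomes the central one.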
I still don’t understand why would you be so much surprised if you saw Omega doing the trick hundred times, assuming no stage magic. Do you find it so improbable that out of the hundred people Omega has questioned not a single one had a quantum coin by him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don’t carry quantum widgets.
Well, I thought about people actively resisting prediction, so some of them flipping a coin or at least using a mental process with several recursion levels (I think that Omega thinks that I think...). I am pretty, though not absolutely, sure that these processes are partly quantum-random or at least chaotic enough to be computationally intractable for everything within our universe. Though Omega would probably still do much better than random (except if everyone flips a coin; I am not sure whether that is predictable with computational power levels realizable in our universe).
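A toy simulation, not from the original discussion and only an assumed illustration, of why a coin-flipping subject caps any history-based predictor at about 50% accuracy on that subject:

```python
import random

def predictor(history):
    """Stand-in predictor that only sees past choices; here it simply predicts
    the more frequent past choice (ties broken as 'one-box')."""
    if not history:
        return "one-box"
    return max(("one-box", "two-box"), key=history.count)

def run(trials=100_000, seed=0):
    rng = random.Random(seed)
    history, hits = [], 0
    for _ in range(trials):
        guess = predictor(history)
        choice = rng.choice(["one-box", "two-box"])  # the subject's coin flip
        hits += guess == choice
        history.append(choice)
    return hits / trials

print(run())  # comes out near 0.5, whatever the predictor does with the history
```

This only shows the point for one particular predictor, but the same ~50% ceiling holds for any predictor whose guess is independent of the genuinely random flip.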
As I understand the responses, most people think the main point of Newcomb’s problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix.
I am no expert on Newcomb’s problem history, but I think it was specifically constructed as a counter-example to the common-sensical decision-theoretic principle that one should treat past events as independent of the decisions being made now. That is also how it is most commonly interpreted on LW, although the concept of a near-omniscient predictor “Omega” is employed in a wide range of different thought experiments here, and it’s possible that your objection may be relevant to some of them.
I am not sure whether it makes sense to call one-boxing cooperation. Newcomb’s problem isn’t the Prisoner’s Dilemma, at least not in its original form.