Posting before reading comments:
If I one-box then (ignoring throughout the tiny probabilities of Omega being wrong) the number is prime. I receive $1M from Omega and $0 from the Lottery.
If I two-box then the number is composite, Omega pays me $1K, and the Lottery pays me $2M.
Therefore I two-box.
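To make the arithmetic behind those two branches explicit, here is a minimal sketch, treating Omega's prediction as certain and using the dollar amounts given in the problem:

```python
# Minimal sketch of the two branches above (Omega's error probability ignored,
# and both numbers assumed identical, as stated in the problem).
omega_if_one_box = 1_000_000      # prime number in the box, Omega pays $1M
omega_if_two_box = 1_000          # composite number, Omega pays $1K
lottery_if_composite = 2_000_000  # the Lottery pays out only on a composite number
lottery_if_prime = 0

one_box_total = omega_if_one_box + lottery_if_prime        # $1,000,000
two_box_total = omega_if_two_box + lottery_if_composite    # $2,001,000
print(one_box_total, two_box_total)
```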
I think this line of reasoning relies on the Number Lottery’s choice of number being conditional on Omega’s evaluation of you as a one-boxer or two-boxer. The problem description (at the time of this writing) states that the Number Lottery’s number is randomly chosen, so it seems like more of a distraction than something you should try to manipulate for a better payoff.
Edit: Distraction is definitely the wrong word. As ShardPhoenix indicated, you might be able to get a better payoff by making your one-box / two-box decision depend on the outcome of the Number Lottery.
Those are not the only precommitments one can make for this type of situation.
Of course! I meant to say that Richard’s line of thought was mistaken because it didn’t take into account the (default) independence of Omega’s choice of number and the Number Lottery’s choice of number. Suggesting that there are only two possible strategies for approaching this problem was a consequence of my poor wording.
I think this line of reasoning relies on the Number Lottery’s choice of number being conditional on Omega’s evaluation of you as a one-boxer or two-boxer.

What? I can’t even parse that.
There IS a number in the box which is the same as the one at the Lottery Bank. The number either is prime or it is composite.
According to the hypothetical, if I two-box, there is a 99.9% correlation with Omega putting a composite number in his box, in which case my payoff is $2,001,000. There is a 0.1% correlation with Omega putting a prime number in the box, in which case my payoff is $1,001,000. If the correlation is a good estimate of probability, then my expected payoff from two-boxing is $2 million, more or less. If I one-box, blah blah blah, expected payoff is $1 million.
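As a rough check of those figures (a sketch only; the payoff for the one-box branch where Omega guesses wrong is an assumption based on the standard Newcomb payoffs, since only the rounded result is given above):

```python
p_right, p_wrong = 0.999, 0.001   # how often Omega's prediction matches the choice

# Two-boxing, using the branch payoffs stated above.
ev_two_box = p_right * 2_001_000 + p_wrong * 1_001_000    # = 2,000,000

# One-boxing: $1,000,000 when Omega is right; the wrong-guess branch is assumed
# here to be an empty box plus the $2M Lottery payout on a composite number.
ev_one_box = p_right * 1_000_000 + p_wrong * 2_000_000    # = 1,001,000

print(ev_two_box, ev_one_box)
```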
Sorry for my poor phrasing. The Number Lottery’s number is randomly chosen and has nothing to do with Omega’s prediction of you as a two-boxer or one-boxer. It is only Omega’s choice of number that depends on whether it believes you are a one-boxer or two-boxer. Does this clear it up?
Note that there is a caveat: if your strategy for deciding to one-box or two-box depends on the outcome of the Number Lottery, then Omega’s choice of number and the Lottery’s choice of number are no longer independent.
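A small illustration of that caveat (a sketch, assuming Omega predicts the strategy perfectly and writes a prime number exactly when it predicts one-boxing): once the strategy reads the Lottery's number, the primality of Omega's number tracks the Lottery's number, so the two are no longer independent.

```python
import random

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Hypothetical conditional strategy: one-box iff the Lottery's number is prime.
def strategy(lottery_number):
    return "one-box" if is_prime(lottery_number) else "two-box"

# Omega, modelled here as a perfect predictor, writes a prime number iff it
# predicts one-boxing, so its number's primality is now determined by the
# Lottery's number rather than being independent of it.
for lottery_number in random.sample(range(2, 10 ** 6), 5):
    omega_number_is_prime = strategy(lottery_number) == "one-box"
    print(lottery_number, is_prime(lottery_number), omega_number_is_prime)
```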
By that logic, you would not chew gum in the CGTA problem.
A crucial difference between the two problems is that in the CGTA problem I can distinguish among an inclination to chew gum, the act of chewing gum, the decision to chew gum, and the means whereby I reach that decision. If the correct causal story is that CGTA causes both abscesses and an inclination to chew gum, and that gum-chewing is protective against abscesses in the entire population, then the correct decision is to chew gum. The presence of the inclination is bad news, but not the decision, given knowledge of the inclination. This is the causal diagram:
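(The diagram image is not reproduced here; from the description above it is roughly the following, with CGTA as the common cause and gum-chewing protective:)

```
CGTA --> Inclination to chew gum --> Action of chewing gum
  |                                        |
  +-----------> Abscesses <----------------+   (chewing is protective)
```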
My action in looking at this diagram and making a decision on that basis does not appear in the diagram itself. If it were added, it would be only as part of an “Other factors” node with an arrow only to “Action of chewing gum”. This introduces no problematic entanglements between my decision processes and the avoidance of abscesses. Perfect rationality is assumed, of course, so that when considering the health risks or benefits of chewing gum, I do not let my decision be irrationally swayed by the presence or absence of the inclination, only rationally swayed by whatever the actual causal relationships are.
In the present thought-experiment, it is an essential part of the setup that these distinctions are not possible. My actual decision processes, including those resulting from considering the causal structure of the problem, are by hypothesis a part of that causal structure. That is what Omega uses to make its prediction. This is the causal diagram I get:
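(Again the image itself is not reproduced; reconstructed from the surrounding description, the key feature being the arrow from the diagram itself into the decision processes:)

```
This causal diagram   --> My decision processes
My decision processes --> Omega's prediction --> Omega's number (prime or composite)
My decision processes --> My action (one-box or two-box)
Omega's number, my action, and the Lottery's number (random) --> my payoff
```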
Notice the arrow from the diagram itself to my decision-making, which makes a non-standard sort of causal diagram. To actually prove that some strategy is correct for this problem requires formalising such self-referential decision problems. I am not aware that anyone has succeeded in doing this. Formalising such reasoning is part of the FAI problem that MIRI works on.
Having drawn that causal diagram, I’m now not sure the original problem is consistently formulated. As stated, Omega’s number has no causal entanglement with the Lottery’s number. But my decision is so entangled with it, and quite strongly. For example, if the Lottery chooses a number whose primeness I can instantly judge, I know what I will get from the Lottery, and if Omega has chosen a different number whose primeness I cannot judge, the problem is then reduced to plain Newcomb and I one-box. If Omega’s decision shows such perfect statistical dependence with mine, and mine is strongly statistically dependent with the Lottery, Omega’s decision cannot be statistically independent of the Lottery. But by the Markov assumption, dependence implies causal entanglement. So must that assumption be dropped in self-referential causal decision theory? Something for MIRI to think about.
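For the reduced case just described, a rough check (a sketch assuming the standard $1M / $1K Newcomb payoffs and treating the 99.9% figure as the predictor's accuracy; the Lottery term is already fixed, so it adds the same constant to both actions and drops out of the comparison):

```python
p_right, p_wrong = 0.999, 0.001
lottery_number_is_composite = True          # already judged at a glance; either value works
lottery = 2_000_000 if lottery_number_is_composite else 0   # same for both actions

ev_one_box = lottery + p_right * 1_000_000 + p_wrong * 0
ev_two_box = lottery + p_right * 1_000 + p_wrong * 1_001_000
print(ev_one_box > ev_two_box)              # True: one-boxing wins, as in plain Newcomb
```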
The case where I can see at a glance whether Omega’s number is prime is like Newcomb with a transparent box B. I (me, not the hypothetical participant) decline to offer a decision in that case.
That’s the same diagram I drew for UNP! :D
This is my reasoning exactly, as I reasoned it out, before reading comments. This seems like the “obvious” answer, at least in the timeless sense. Of course, when reading a problem like this, one more thing you have to bear in mind (outside the problem, of course, as inside the problem you do not have this additional information) is that the obvious solution is unlikely to be fully correct, or else what would be the point of Eliezer posing the problem in the first place… If one is going to accept timeless decision making, then I see no reason not to accept it in cases where it is anchored on fundamental mathematical facts rather than “just” on physical facts like whether or not you will choose to take the money, as in the standard Newcomb’s paradox.
Maybe, but obvious answers to such problems are often wrong. Often, multiple different answers are each obviously the exclusively right answer. And look at all the people in this thread one-boxing. Not so obvious to them.
My reasoning was as stated, but I’m not going to use its “obviousness” as an additional argument in favour of it. And on reading the comments and the Facebook thread, I notice that I have neglected to consider the hypothetical situations in which the two numbers are different. On considering it, it seems that I should still argue as I did, using all the available information, i.e. that on this occasion the two numbers are the same. But it is merely obvious to me that this is so; I am not at all certain.
the obvious solution is unlikely to be fully correct, or else what would be the point of Eliezer posing the problem in the first place...

I’m disinclined to guess the right answer on the basis of predicting the hidden purposes of someone smarter than me. But I can, as it happens, think of a reason for posing a question whose “obvious” solution is completely right. It could be just the first of a garden path series of puzzles for which the “obvious” solutions are collectively inconsistent with any known decision theory.
Upvoted for awesome epigram.