I agree that if I know the rules, I can reason “if I commit to one-box, Omega will predict I will one-box, so the money will be there”, and if I don’t know the rules, I can’t reason that way (since I can’t know the relationship between one-boxing and money).
It seems to me that if I don’t know the rules, I can similarly reason “if I commit to doing whatever I can do that gets me the most money, then Omega will predict that I will do whatever I can do that gets me the most money. If Omega sets up the rules such that I believe doing X gets me the most money, and I can do X, then Omega will predict that I will do X, and will act accordingly. In the standard formulation, unpredictably two-boxing gets me the most money, but because Omega is a superior predictor I can’t unpredictably two-box. Predictably one-boxing gets me the second-most money. Because of my precommitment, Omega will predict that upon being informed of the rules I will one-box, and the money will be there.”
Now, I’m no kind of decision theory expert, so maybe there’s something about CDT that precludes reasoning in this way. So much the worse for CDT if so, since this seems like an entirely straightforward way to reason.
Incidentally, I don’t agree to the connotations of “jumped on.”
Checking the definition, it seems that “jump on” is more negative than I thought it was. I just meant that both of you disagreed in a similar way and fairly quickly; I didn’t feel reprimanded or attacked.
I do not understand at all the reasoning that follows “if I don’t know the rules”. If you are presented with the two boxes out of the blue and have the rules explained to you for the first time, there is no commitment to make (you have to decide in the moment), and the prediction was made before, not after.
The best time to plant a tree is twenty years ago. The second-best time is now.
Similarly, the best time to commit to always doing whatever gets me the most utility in any given situation is at birth, but there’s no reason I shouldn’t commit to it now. I certainly don’t have to wait until someone presents me with two boxes.
Sure, I can and should commit to doing “whatever gets me the most utility”, but this is general and vague. And the detailed reasoning that follows in your parent comment is something I cannot do now if I have no conception of the problem. (In case it is not clear, I am assuming in my version that before being presented with the boxes and having the rules explained, I am an innocent person who has never thought of the possibility of my choices being predicted, etc.)
Consider the proposition C: “Given a choice between A1 and A2, if the expected value of A1 exceeds the expected value of A2, I will perform A1.”
If I am too innocent to commit to C, then OK, maybe I’m unable to deal with Newcomb-like problems.
But if I can commit to C, then… well, suppose I’ve done so.
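The commitment C above can be sketched as a one-line decision rule (the function name and signature are my own illustration, not part of the problem statement):

```python
# A minimal sketch of proposition C: given a choice between two actions
# with estimated expected values, always perform the one whose expected
# value is higher. Names here are illustrative, not canonical.

def commit_C(ev_a1, ev_a2, a1, a2):
    """Perform A1 whenever EV(A1) exceeds EV(A2); otherwise perform A2."""
    return a1 if ev_a1 > ev_a2 else a2

# Example: if A1 is worth more in expectation, C dictates A1.
assert commit_C(5, 3, "A1", "A2") == "A1"
assert commit_C(2, 3, "A1", "A2") == "A2"
```

The point of the sketch is that C is purely forward-looking: the rule fires on whatever expected values the agent computes once a problem is presented, so it can be adopted long before any particular problem (boxes included) is known.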
Now Omega comes along, and for reasons of its own, it decides it’s going to offer me two boxes, with some cash in them, and the instructions: one-box for N1, or two-box for N2, where N1 > N2. Further, it’s going to put either N2 or N1 + N2 in the boxes, depending on what it predicts I will do.
So, first, it must put money in the boxes.
Which means, first, it must predict whether I’ll one-box or two-box, given those instructions.
Are we good so far?
Assuming we are: so OK, what is Omega’s prediction?
It seems to me that Omega will predict that I will, hypothetically, reason as follows:
“There are four theoretical possibilities. In order of profit, they are:
1: unpredictably two-box (nets me N1 + N2)
2: predictably one-box (nets me N1)
3: predictably two-box (nets me N2)
4: unpredictably one-box (nets me N2)
So clearly I ought to pick 1, if I can.
But can I?
Probably not, since Omega is a very good predictor. If I try to pick 1, I will likely end up with 3. Which means the expected value of picking 1 is less than the expected value of picking 2.
So I should pick 2, if I can.
But can I?
Probably, since Omega is a very good predictor. If I try to pick 2, I will likely end up with 2.
So I will pick 2.”
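The hypothetical reasoning above is an expected-value comparison against a predictor of some accuracy q. A minimal sketch under the payoff model used here (one-boxing nets N1 if predicted, N2 if not; two-boxing nets N2 if predicted, N1 + N2 if it slips past the predictor); q is an assumed parameter, not part of the original problem:

```python
# Expected payoff of each choice against a predictor with accuracy q.
# Payoffs follow the four possibilities listed above: a predictor of
# accuracy q catches your actual choice with probability q.

def expected_value(choice, q, n1, n2):
    """Expected payoff of `choice` when Omega predicts it with probability q."""
    if choice == "one-box":
        # Predicted one-box nets N1; unpredicted one-box nets N2.
        return q * n1 + (1 - q) * n2
    # Predicted two-box nets N2; unpredicted two-box nets N1 + N2.
    return q * n2 + (1 - q) * (n1 + n2)

# With a near-perfect predictor, predictably one-boxing (option 2) wins:
n1, n2, q = 1_000_000, 1_000, 0.99
assert expected_value("one-box", q, n1, n2) > expected_value("two-box", q, n1, n2)
```

Note that the comparison flips for a weak predictor (e.g. q = 0.5, a coin flip), which matches the intuition that the whole argument leans on Omega being a very good predictor.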
And, upon predicting that I will pick 2, Omega will put N1 + N2 in the boxes.
At this point, I have not yet been approached, am innocent, and have no conception of the problem.
Now, Omega approaches me, and what do you know: it was right! That is in fact how I reason once I’m introduced to the problem. So I one-box.
At this point, I would make more money if I two-box, but I am incapable of doing so… I’m not the sort of system that two-boxes. (If I had been, I most likely wouldn’t have reached this point.)
If there’s a flaw in this model, I would appreciate having it pointed out to me.