The best time to plant a tree is twenty years ago. The second-best time is now.
Similarly, the best time to commit to always doing whatever gets me the most utility in any given situation is at birth, but there’s no reason I shouldn’t commit to it now. I certainly don’t have to wait until someone presents me with two boxes.
Sure, I can and should commit to doing “whatever gets me the most utility”, but that commitment is general and vague. And the detailed reasoning that follows in your parent comment is something I cannot do now, when I have no conception of the problem. (In case it is not clear, I am assuming in my version that, before being presented with the boxes and having the rules explained, I am an innocent person who has never considered the possibility of my choices being predicted, etc.)
Consider the proposition C: “Given a choice between A1 and A2, if the expected value of A1 exceeds the expected value of A2, I will perform A1.”
If I am too innocent to commit to C, then OK, maybe I’m unable to deal with Newcomb-like problems.
But if I can commit to C, then… well, suppose I’ve done so.
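For concreteness, proposition C can be written down as a decision rule. Here is a minimal sketch; the function names and the toy example are mine, not part of the original setup:

```python
# A minimal sketch of proposition C as code. The names here
# (commit_to_C, the umbrella example) are my own illustration.

def commit_to_C(expected_value):
    """Return a chooser implementing C: given A1 and A2, perform A1
    whenever its expected value exceeds that of A2, else perform A2."""
    def choose(a1, a2):
        return a1 if expected_value(a1) > expected_value(a2) else a2
    return choose

# Example: a toy agent that values actions via a lookup table.
ev = {"take umbrella": 5.0, "leave umbrella": 2.0}
choose = commit_to_C(ev.get)
assert choose("take umbrella", "leave umbrella") == "take umbrella"
```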
Now Omega comes along, and for reasons of its own, it decides it’s going to offer me two boxes with some cash in them, and the instructions: one-box for N1, or two-box for N2, where N1 > N2. Further, it’s going to put a total of either N2 or N1 + N2 in the boxes, depending on what it predicts I will do.
So, first, it must put money in the boxes.
Which means it must first predict whether I’ll one-box or two-box, given those instructions.
Are we good so far?
Assuming we are: so OK, what is Omega’s prediction?
It seems to me that Omega will predict that I will, hypothetically, reason as follows:
“There are four theoretical possibilities. In order of profit, they are:
1: unpredictably two-box (nets me N1 + N2)
2: predictably one-box (nets me N1)
3: predictably two-box (nets me N2)
4: unpredictably one-box (nets me 0)
So clearly I ought to pick 1, if I can.
But can I?
Probably not, since Omega is a very good predictor. If I try to pick 1, I will likely end up with 3. Which means the expected value of picking 1 is less than the expected value of picking 2.
So I should pick 2, if I can.
But can I?
Probably, since Omega is a very good predictor. If I try to pick 2, I will likely end up with 2.
So I will pick 2.”
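To make the expected-value comparison in that quoted reasoning concrete, here is a small sketch. The dollar amounts and the predictor-accuracy parameter p are placeholders of my own choosing, not part of the problem as stated:

```python
# Sketch of the expected-value comparison above. N1, N2, and p are
# hypothetical values; p is the assumed probability that Omega
# predicts my choice correctly.

N1, N2 = 1_000_000, 1_000   # placeholder amounts with N1 > N2
p = 0.99                    # assumed predictor accuracy

# Payoff as a function of (Omega's prediction, my actual choice):
payoff = {
    ("one-box", "one-box"): N1,        # option 2: predictably one-box
    ("one-box", "two-box"): N1 + N2,   # option 1: unpredictably two-box
    ("two-box", "one-box"): 0,         # option 4: unpredictably one-box
    ("two-box", "two-box"): N2,        # option 3: predictably two-box
}

def expected_value(choice, p):
    """EV of making `choice` when Omega predicts it with probability p."""
    predicted = payoff[(choice, choice)]
    other = "two-box" if choice == "one-box" else "one-box"
    unpredicted = payoff[(other, choice)]
    return p * predicted + (1 - p) * unpredicted

ev_one = expected_value("one-box", p)   # p * N1              = 990,000
ev_two = expected_value("two-box", p)   # p*N2 + (1-p)*(N1+N2) = 11,000
print(ev_one, ev_two, ev_one > ev_two)  # one-boxing wins
```

On these numbers, the comparison only tips toward two-boxing when p falls below (N1 + N2) / (2 * N1) = 0.5005, i.e. when Omega is barely better than a coin flip, which matches the quoted conclusion that trying for option 1 loses to option 2 against any good predictor.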
And, upon predicting that I will pick 2, Omega will put N1 + N2 in the boxes.
At this point, I have not yet been approached, am innocent, and have no conception of the problem.
Now, Omega approaches me, and what do you know: it was right! That is in fact how I reason once I’m introduced to the problem. So I one-box.
At this point, I would make more money if I two-boxed, but I am incapable of doing so… I’m not the sort of system that two-boxes. (If I had been, I most likely wouldn’t have reached this point.)
If there’s a flaw in this model, I would appreciate having it pointed out to me.