We also have the information that our decision won’t affect what’s in the boxes, and we should take that into account as well.
The only thing that our decision determines is whether we’ll get X or X+1000 dollars. It does not determine the value of X.
If X were determined by, say, flipping a coin, should a rational agent one-box or two-box? Two-box, obviously, because there’s not a damn thing he can do to affect the value of X.
So why choose differently when X is determined by the kind of brain the agent has? When the time to make a decision comes, there still isn’t a damn thing he can do to affect the value of X!
The only difference between the two scenarios above is that in the second one the thing that determines the value of X also happens to be the thing that determines the decision the agent will make. This creates the illusion that the decision determines X, but it doesn’t.
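To make the comparison concrete, here is a minimal Python sketch of the two scenarios. It assumes the standard amounts (the opaque box holds either 0 or 1,000,000 dollars and the transparent one holds 1000), and the names `payoff` and `predictor` are illustrative, not part of the original problem statement:

```python
# A minimal sketch of the two scenarios, assuming the standard Newcomb
# payoffs: X is either $0 or $1,000,000, and the transparent box holds $1,000.
# The point: in both scenarios X is fixed *before* the decision, so for any
# fixed X, two-boxing pays X + 1000 and one-boxing pays X.

import random

def payoff(x, decision):
    """Once X is fixed, the decision only adds the transparent box or not."""
    return x + 1000 if decision == "two-box" else x

# Scenario 1: X is set by a coin flip, independent of the agent.
x = random.choice([0, 1_000_000])
assert payoff(x, "two-box") == payoff(x, "one-box") + 1000

# Scenario 2: X is set by the kind of brain the agent has. The same trait
# also produces the agent's eventual decision, so decision and X correlate,
# but the decision still doesn't cause X.
def predictor(brain):
    return 1_000_000 if brain == "one-boxer" else 0

for brain in ["one-boxer", "two-boxer"]:
    x = predictor(brain)                # X is fixed here, before the choice
    decision = "one-box" if brain == "one-boxer" else "two-box"
    # Whatever brain the agent has, two-boxing would have paid 1000 more:
    assert payoff(x, "two-box") == payoff(x, "one-box") + 1000
```

Every assert passes: once X is fixed, two-boxing is worth exactly 1000 dollars more, regardless of how X got its value.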
Two-boxing is always the best decision. Why wouldn’t it be? The agent will get 1000 dollars more than he would have gotten otherwise. Of course, it would be even better to pre-commit to one-boxing, since this will indeed affect the kind of brain we have, which will in turn affect the value of X, but that decision is outside the scope of Newcomb’s problem.
Still, if the agent had pre-committed to one-boxing, shouldn’t he two-box once he’s on the spot? That’s the wrong question. If he really pre-committed to one-boxing, he won’t be able to choose differently. No, that’s not quite right. If the agent really pre-committed to one-boxing, he won’t even have to make the decision to stick to his previous decision. With or without pre-commitment, there is only one decision to be made, though at different times. If you have a Newcombian decision to make, you should always two-box, but if you pre-committed you won’t have a Newcombian decision to make in Newcomb’s problem; actually, for that reason, it won’t really be Newcomb’s problem… or a problem of any kind, for that matter.
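As a hedged sketch of that last point, reusing the illustrative `predictor` from above: pre-committing happens before the prediction, so it changes the brain the predictor scans, and therefore X, while the on-the-spot choice changes neither.

```python
# A sketch of the pre-commitment point: committing early shapes the brain
# the predictor scans, hence X, whereas the on-the-spot choice cannot.
# "play" and "precommit_to_one_box" are illustrative names, not anything canonical.

def predictor(brain):
    return 1_000_000 if brain == "one-boxer" else 0

def play(precommit_to_one_box):
    # Pre-committing happens before the prediction, so it shapes the brain...
    brain = "one-boxer" if precommit_to_one_box else "two-boxer"
    x = predictor(brain)  # ...and therefore the value of X.
    if precommit_to_one_box:
        return x          # no Newcombian decision is left to make
    return x + 1000       # on the spot, two-boxing dominates for fixed X

print(play(precommit_to_one_box=True))   # 1000000: the pre-commitment paid off
print(play(precommit_to_one_box=False))  # 1000: two-boxing was still best given X = 0
```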