I can believe that it would make sense to commit ahead of time to one-box at such an event. Doing so would affect your behavior in a way that the predictor might pick up on.
Hmm. Thinking about this convinces me that there’s a big problem here in how we talk about the problem, because if we allow people who already knew about Newcomb’s Problem to play, there are really 4 possible actions, not 2:
intended to one-box, one-boxed
intended to one-box, two-boxed
intended to two-box, one-boxed
intended to two-box, two-boxed
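To make the four cases above concrete, here is a minimal sketch of the expected payoffs. The $1,000 / $1,000,000 amounts and the accuracy parameter p (how reliably the predictor reads your intention) are my own assumptions, not part of the original problem statement:

```python
# Rough expected payoffs for the four (intention, action) combinations,
# assuming the predictor fills the opaque box based on your *intention*
# and reads that intention correctly with probability p.
SMALL = 1_000        # transparent box (assumed standard amount)
BIG = 1_000_000      # opaque box, if the predictor expected one-boxing

def expected_payoff(intend_one_box: bool, take_one_box: bool, p: float) -> float:
    # Probability the opaque box actually contains $1M, given your intention.
    prob_big = p if intend_one_box else (1 - p)
    base = prob_big * BIG
    return base if take_one_box else base + SMALL

for intend in (True, False):
    for act in (True, False):
        label = (
            f"intended to {'one' if intend else 'two'}-box, "
            f"{'one' if act else 'two'}-boxed"
        )
        print(f"{label}: {expected_payoff(intend, act, p=0.99):,.0f}")
```

With p near 1, the ranking is exactly the point under discussion: given a fixed intention, two-boxing is always $1,000 better, but the “intended to one-box, two-boxed” row is only available to someone who can actually fool the predictor.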
I don’t know if the usual statement of Newcomb’s problem specifies whether the subject learns the rules of the game before or after the predictor makes a prediction. It seems to me that’s a critical factor. If the subject is told the rules of the game before the predictor observes the subject and makes a prediction, then we’re just saying Omega is a very good lie detector, and the problem is not even about decision theory, but about psychology: Do you have a good enough poker face to lie to Omega? If not, precommit to one-boxing.
We shouldn’t ask, “Should you two-box?”, but, “Should you two-box now, given how you would have acted earlier?” The various probabilities in the present depend on what you thought in the past. Under the proposition that Omega is perfect at predicting, the person inclined to two-box should still two-box, ’coz that $1M probably ain’t there.
So Newcomb’s problem isn’t a paradox. If we’re talking just about the final decision, the one made by the subject after Omega’s prediction, then the subject should probably two-box (as argued in the post). If we’re talking about two decisions, one before Omega’s prediction and one at the moment of choosing boxes, then all we’re asking is whether you can convince Omega that you’re going to one-box when you aren’t. Then it would not be terribly hard to say that a predictor might be so good (say, an Amazing Kreskin-level cold reader of humans, or the case where you are an AI) that your only hope is to precommit to one-boxing.
I don’t think this gets Parfit’s Hitchhiker right. You need a decision theory that, once you are safely returned to the city, pays the rescuer even though you have no external obligation to do so. Otherwise the rescuer, having predicted that you wouldn’t pay, won’t have rescued you in the first place.
I don’t think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here—as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, or by sneaking in different metaphysics, or by just doing bad logic (in this case, usually allowing contradictory beliefs about free will in different parts of the analysis).
Your comment implies you’re talking about policy, which must be modelled as an iterated game. I don’t deny that one-boxing is good in the iterated game.
My concern in this post is that there’s been a lack of distinction in the community between “one-boxing is the best policy” and “one-boxing is the best decision at one point in time in a decision-theoretic analysis, which assumes complete freedom of choice at that moment.” This lack of distinction has led many people into wishful or magical rather than rational thinking.
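As a way of pinning down that distinction, here is a minimal sketch contrasting the two questions: which policy does best if the prediction tracks the policy you adopt, versus which act does best once the prediction is already fixed. The payoff amounts and the accuracy parameter p are assumptions of mine:

```python
# Two different questions that are easy to conflate.
SMALL, BIG = 1_000, 1_000_000
p = 0.99  # assumed predictor accuracy

# Question 1: best *policy*, where the prediction tracks the policy you adopt.
ev_policy_one_box = p * BIG                # usually predicted correctly, box is full
ev_policy_two_box = (1 - p) * BIG + SMALL  # usually predicted correctly, box is empty
print("policy view:", ev_policy_one_box, "vs", ev_policy_two_box)

# Question 2: best *act now*, with the prediction (and box contents) already fixed.
for box_full in (True, False):
    contents = BIG if box_full else 0
    # Two-boxing gains SMALL over one-boxing whatever the contents are.
    print("act view   :", contents, "vs", contents + SMALL)
```

The first comparison favors one-boxing for any p above roughly 0.5005; the second favors two-boxing by exactly $1,000 regardless of the contents, which is the dominance argument. The disagreement is over which of these questions the problem is actually asking.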
I don’t think that what you need has any bearing on what reality has actually given you.
As far as I can tell, I would pay Parfit’s Hitchhiker because of intuitions that were rewarded by natural selection. It would be nice to have a formalization that agrees with those intuitions.
or by sneaking in different metaphysics
This seems wrong to me, if you’re explicitly declaring different metaphysics (if you mean by “metaphysics” what I think you mean). If I view myself as a function that generates an output based on inputs, and view my decision-making procedure as the search for the best such function (for maximizing utility), then this could be considered different metaphysics from trying to cause the most increase in utility for myself by making decisions, but it’s not obvious that the latter leads to better decisions.
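For what it’s worth, here is a minimal sketch of that “myself as a function” framing. The policy names and the assumption that the predictor simply runs the same function are mine; the point is only that the thing being chosen is the function, and the predictor’s behavior depends on which function you are:

```python
# "Agent as function" framing: choose the function, not the isolated act.
# Assumes the predictor fills the opaque box by running the same function.
SMALL, BIG = 1_000, 1_000_000

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

def payoff(agent) -> int:
    predicted = agent()                      # predictor simulates the agent
    box_b = BIG if predicted == "one-box" else 0
    actual = agent()                         # the agent then actually chooses
    return box_b if actual == "one-box" else box_b + SMALL

best = max((one_boxer, two_boxer), key=payoff)
print(best.__name__, payoff(best))  # -> one_boxer 1000000
```

Under the act-by-act framing, the same agent standing in front of the boxes still sees two-boxing as dominant; the two framings disagree about what the variable of choice is, which is the metaphysical difference being pointed at.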