Really? I feel like I would be more inclined to two-box in the real life scenario. There will be two physical boxes in front of me that already have money in them (or not). It’ll just be me and two boxes whose contents are already fixed. I will really want to just take them both.
Maybe the first time. What will you do the second time?
I was surprised by the more general statement “that in a real-life situation even philosophers would one-box.” In the specific example of an iterated Newcomb (or directly observing the results of others) I agree that two-boxers would probably move towards a one-box strategy.
The reason for this, at least as far as I can introspect, has to do with the salience of actually experiencing a Newcomb situation. When reasoning about the problem in the abstract I can easily conclude that one-boxing is the obviously correct answer. However, when I sit and really try to imagine the two boxes sitting in front of me, my model of myself in that situation two-boxes more often than the version of me sitting at his computer. I think a similar effect may be at play when I imagine myself physically present as person after person two-boxes and finds one of the boxes empty.
So I think we agree that observe(many two-box failures) --> more likely to one-box.
I do think that experiencing the problem as traditionally stated (no iteration or actually watching other people) will have a relationship of observe(two physical boxes, predictor gone) --> more likely to two-box.
The second effect is probably weak as I think I would be able to override the impulse to two-box with fairly high probability.
By a “real-life situation” I meant a Newcomb-like problem we routinely face but don’t recognize as such, like deciding on the next move in a poker game, or on the next play in a sports game. Whenever I face a situation where my opponent has likely precommitted to a course of action based on their knowledge of me, and I have reliable empirical evidence of that knowledge, and betting against such evidence carries both risks and rewards, I am in a Newcomb situation.
I don’t see how those are Newcomb situations at all. When I try to come up with an example of a Newcomb-like sports situation (e.g., football, since plays are preselected and revealed more or less simultaneously), I get something like the following:
you have two plays A and B (one-box, two-box)
the opposing coach has two plays X and Y
if the opposing coach predicts you will select A they will select X, and if they predict you will select B they will select Y.
A vs X results in a moderate gain for you.
A vs Y results in no gain for you.
B vs Y results in a small gain for you.
B vs X results in a large gain for you.
You both know all this.
The problem lies in the third assumption. Why would the opposing coach ever select play X? Symmetrically, if Omega were actually competing against you and trying to minimize your winnings, why would it ever put a million dollars in the second box?
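The dominance problem can be made concrete with a quick sketch. The payoff numbers below are illustrative stand-ins for “moderate/none/small/large,” and the game is assumed zero-sum (the coach’s payoff is the negative of yours):

```python
# Payoff matrix for the football example above (your gains; illustrative
# numbers). Zero-sum is assumed, so the coach minimizes your payoff.
payoffs = {
    ("A", "X"): 5,   # moderate gain for you
    ("A", "Y"): 0,   # no gain for you
    ("B", "Y"): 2,   # small gain for you
    ("B", "X"): 10,  # large gain for you
}

# The coach's best response to each of your plays is whichever play
# minimizes your gain.
for my_play in ("A", "B"):
    best = min(("X", "Y"), key=lambda coach_play: payoffs[(my_play, coach_play)])
    print(my_play, "->", best)  # Y in both cases
```

Against A the coach prefers Y (0 < 5), and against B the coach also prefers Y (2 < 10), so X is strictly dominated: a payoff-minimizing coach never plays it, which is exactly why the third assumption fails.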
Newcomb’s works, in part, due to Omega’s willingness to select a dominated strategy in order to mess with you. What real-life situation involves an opponent like that?
Newcomb’s problem does happen (and has happened) in real life. Also, Omega is trying to maximize his own stake rather than minimize yours; he made a bet with Alpha at much higher stakes than the $1,000,000. Not to mention that Newcomb’s problem bears a strong resemblance to the prisoners’ dilemma, which does occur in real life.
And Parfit’s Hitchhiker scenarios, and blackmail attempts, not to mention voting.
Sure, I didn’t mean to imply that there were literally zero situations that could be described as Newcomb-like (though I think that particular example is a questionable fit). I just think they are extremely rare (particularly in a competitive context such as poker or sports).
edit: That example is more like a prisoner’s dilemma where Kate gets to decide her move after seeing Joe’s. Agree that Newcomb’s definitely has similarities with the relatively common PD.
Oddly enough, that problem is also solved better by a time-variable agent: Joe proposes sincerely, being an agent who would never back out of a commitment of this level. If his marriage turns out poorly enough, Joe, while remaining the same agent who once would never have backed out, backs out.
And the prisoners’ dilemma as it is written cannot occur in real life, because it requires no further interaction between the agents.
If I have even a little bit of reason to believe the problem is Newcomb-like (say, I saw the predictor make two or three successful predictions and no unsuccessful ones, or I know the Omega would face bad consequences for predicting wrongly (even just in terms of reputation), or I know the Omega studied me well), I’d one-box in real life too.
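The “even a little bit of reason” point follows from a quick expected-value sketch, assuming the standard payoffs ($1,000,000 in the opaque box iff one-boxing is predicted, $1,000 in the transparent box) and a predictor with accuracy p:

```python
# Expected value of each choice against a predictor with accuracy p,
# using the standard Newcomb payoffs.
def ev_one_box(p):
    # You get $1,000,000 only when the predictor correctly foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # You get $1,000,000 only when the predictor wrongly foresaw one-boxing,
    # plus the guaranteed $1,000 from the transparent box.
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.51, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
```

Setting the two expressions equal gives a break-even accuracy of p = 1001/2000 ≈ 0.5005, so a predictor only barely better than a coin flip already makes one-boxing the higher-expected-value choice.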
Well, I am referring specifically to an instinctive/emotional impulse driven by the heavily ingrained belief that money does not appear or disappear from closed boxes. If you don’t experience that impulse or will always be able to override it then yes, one-boxing in real life would be just as easy as in the abstract.
As per my above response to shminux, I think this effect would be diminished and eventually reversed after personally observing enough successful predictions.
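How quickly “enough successful predictions” accumulates can be sketched with a simple Bayesian update. Assuming a uniform Beta(1, 1) prior over the predictor’s accuracy, each observed success or failure updates the posterior, whose mean is a/(a+b):

```python
# Posterior mean estimate of the predictor's accuracy after observing
# some successful and failed predictions, starting from a uniform
# Beta(1, 1) prior. (Illustrative model, not from the original thread.)
def posterior_mean_accuracy(successes, failures, a=1.0, b=1.0):
    return (a + successes) / (a + successes + b + failures)

# A handful of flawless predictions already moves the estimate well
# above chance:
for n in (0, 1, 2, 3, 10):
    print(n, posterior_mean_accuracy(n, 0))
```

Two flawless predictions already put the posterior mean at 0.75, comfortably past the roughly 0.5005 accuracy at which one-boxing becomes the higher-expected-value choice, which is consistent with the intuition that the two-boxing impulse reverses after observing only a few successes.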