I don’t think most people one-box. Maybe most LW readers one-box.
I have two real boxes, labelled as in Newcomb’s problem, with 1 and 4 quarters standing in for the $10k and $1M. I have shown them to people at Less Wrong meetups, and also to various friends of mine, about 20 people in total.
Almost everyone I’ve tried it on has one-boxed, even though I left out the part of the description about the predictor being really accurate, and pre-seeded the boxes before I even knew who would be choosing. Maybe it would be different with $10k instead of $0.25. Maybe my friends are unusual and a different demographic would two-box. Maybe it’s due to a quirk of how I present them. But unless someone presents contrary evidence, I have to conclude that most people are one-boxers.
What?!? You offer people two boxes with essentially random amounts of money in them, and they choose to take one of the boxes instead of both? And these people are otherwise completely sane?
Could you maybe give us details of how exactly you present the problem? I can’t imagine any presentation that would make anyone even slightly tempted to one-box this variant. (Maybe if I knew I’d get to play again one day...)
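For concreteness, here is the dominance arithmetic behind that incredulity, as a minimal sketch. The quarter-scale payoffs are taken from the description above ($0.25 available no matter what, $1.00 in the other box only if it happened to be seeded that way); everything else is illustration, not anyone’s actual procedure.

```python
# Minimal sketch of the dominance argument for pre-seeded boxes, assuming
# $0.25 in the always-available box and $1.00 in the other box only if it
# was seeded that way before the player showed up.
for seeded_amount in (1.00, 0.00):          # the contents are already fixed
    one_box = seeded_amount                 # take only the possibly-full box
    two_box = seeded_amount + 0.25          # take both boxes
    print(f"seeded ${seeded_amount:.2f}: one-box ${one_box:.2f}, two-box ${two_box:.2f}")
# Whatever was seeded, two-boxing pays exactly $0.25 more, which is why
# one-boxing here looks baffling if you ignore any prediction story.
```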
That seems bizarre to me too. But if Jimrandomh is filling his boxes on the basis of what most people would do, and most people do one-box, then perhaps they are just behaving as rational, highly correlated, timeless decisionmakers.
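And here is a hedged sketch of when that correlation story would actually pay off: suppose the pre-seeding matches a given player’s eventual choice with some probability p (an assumed number, not anything jimrandomh has measured), and compare expected values.

```python
# Expected value of each choice if the seeding matches your own choice with
# probability p. Both p and the quarter-scale payoffs are assumptions made
# for illustration.
def expected_values(p):
    one_box = p * 1.00                      # full box iff "one-box" was seeded
    two_box = p * 0.25 + (1 - p) * 1.25     # seeded for two-box: $0.25; otherwise $1.25
    return one_box, two_box

for p in (0.5, 0.625, 0.7, 0.9):
    ob, tb = expected_values(p)
    print(f"p = {p:.3f}: one-box EV ${ob:.3f}, two-box EV ${tb:.3f}")
# One-boxing pulls ahead once p > 0.625, i.e. once the seeding tracks your
# choice noticeably better than chance.
```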
Signalling might explain this behavior: people would rather be seen as having gotten the problem right, or signal non-greediness, than get an extra $0.25. As evidence for this, some people turn down the $1.00 in box one.
No one’s given the real correct solution, which is “inspect the boxes more thoroughly”. One of them has an extra label on the bottom, offering an extra $1.00 for finding it if you haven’t opened any boxes yet, which I’ve never had to pay out on. The moral is supposed to be that theory is hard to transfer into the real world, and that you should question your assumptions.
You let people inspect the boxes? Wouldn’t they be distinguishable by weight?
Weird. I two-box on that variant.
Reminds me of a story, set in a lazy Mark Twain river town. Two friends walking down the street. First says to second, “See that kid? He is really stupid.” Second asks, “Why do you say that?” First answers, “Watch”. Approaches kid. Holds out nickel in one hand and dime in the other. Asks kid which he prefers. “I’ll take the nickel. It’s bigger”. Man hands nickel to kid with smirk, and the two friends continue on.
Later the second man comes back and attempts to instruct the kid. “A dime is worth twice the value; that is, it buys more candy,” says he, “even though the nickel looks bigger.” The kid gives the man a pitying look. “OK, if you say so. But I’ve made seven nickels so far this month. How many dimes have you made?”
Which brings me to my real point: empirical research, which I’m sure you have seen, in which player 1 is asked to specify a split of $10 between himself and player 2. Player 2 then chooses to accept or reject; if he rejects, neither player gets anything. As I recall, when a greedy player 1 specifies more than about 70% for himself, player 2 frequently rejects, even though doing so costs him money. In classical “rational agent” game theory this can only be understood by postulating that player 2 does not believe the researchers’ claim that the game is one-shot.
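A toy version of that setup, just to pin down the mechanics; the 30% acceptance threshold below is an assumed stand-in for the observed tendency to reject splits where player 1 keeps much more than about 70%, not the actual experimental data.

```python
# Toy Ultimatum Game: player 1 proposes how much of the $10 to keep, player 2
# accepts or rejects, and a rejection leaves both players with nothing.
# The p2_min_share threshold is an illustrative assumption.
def ultimatum(share_kept_by_p1, p2_min_share=0.30):
    offer_to_p2 = 1.0 - share_kept_by_p1
    if offer_to_p2 < p2_min_share:          # player 2 punishes perceived greed
        return 0.0, 0.0
    return 10 * share_kept_by_p1, 10 * offer_to_p2

print(ultimatum(0.6))   # (6.0, 4.0): accepted
print(ultimatum(0.8))   # (0.0, 0.0): rejected, neither player gets anything
```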
What is the point? Well, perhaps people who have read about Newcomb problems are assuming (like most people in the research) that, somehow or other, greed will be punished.
Punishing unfair behavior even when it is costly to do so is called altruistic punishment, and this particular experiment is called the Ultimatum Game.
Is it plausible that evolution would gradually push that 70% down to 30% or even lower, given enough time? There may not yet have been enough time for group selection to produce such a strong effect, but sooner or later it should happen, shouldn’t it? I’m thinking a species with that degree of selflessness would be more likely to survive than present humanity, because a larger percentage of them would cooperate on existential risk reduction than is the case today. Yet 10-30% is still not 0%, so even at 10% there would be enough selfishness left to make sure they wouldn’t end up refusing each other’s gifts until they all starve to death or something.
Can group selection on genes for psychological constitution in humans already explain why player 1 takes only about 70% on average, rather than, say, at least 90%, in the game you describe?
What do chimps do? Does a chimp player 1 take more or less than 70%?
First of all, from the standpoint of the good of the group, I see no reason why player 1 shouldn’t keep 100% of the money. After all, it is not as if player 2 were starving, and surely the good of player 1 is just as important to the group as the good of player 2. There is almost no reason for sharing from the standpoint of either Bentham-style utilitarianism or the good of the group.
However, there is a reason for sharing when you realize that player 2 is quite reasonably selfish, and has the power to make your life miserable. So, go ahead and give the jerk what he asks for. It is certainly to your own selfish advantage to do so. As long as he doesn’t get too greedy.
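A hedged sketch of that last point: if player 2’s chance of rejecting rises once player 1 keeps more than about 70%, there is a purely selfish optimum well short of keeping everything. The rejection curve below is an assumption chosen to match the roughly-70% figure mentioned above, not measured data.

```python
# Player 1's expected take if player 2 rejects with a probability that climbs
# from 0 to 1 as player 1's share rises from 70% to 100% (an assumed curve).
def rejection_prob(share_kept):
    return max(0.0, min(1.0, (share_kept - 0.7) / 0.3))

def expected_take(share_kept):
    return 10 * share_kept * (1 - rejection_prob(share_kept))

for s in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    print(f"keep {s:.0%}: expected take ${expected_take(s):.2f}")
# Under this curve the selfish optimum is to keep about 70%; grabbing more
# costs player 1 money in expectation, which matches the commenter's point.
```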
I’d like to see this done with a really good mentalist.
“pre-seeded the boxes before I even knew who would be the one choosing”

If I had met you doing this trick in real life (at least before I started spilling my opinions to the universe through my comments to this blog), I would strongly suspect that you were doing exactly this. And then I would definitely pick both boxes. (Well, first I’d try to figure out whether you were likely to offer me any more games, and I’d pick two boxes if I was fairly confident that you would not.) And I would get all of the money, since you would have predicted that I would pick only one box (assuming that you really seed the boxes based on your honest best prediction).
On the other hand, if the situation is not presented as a game (even when I still don’t expect any iteration), I pretty consistently cooperate on all of the standard examples (prisoner’s dilemma, etc). But since feeling like a moral and cooperative person (except when playing games, of course) has high utility for me, I’m not really playing prisoner’s dilemma (etc) after all, so never mind.
This is interesting. I suspect a selection effect, but if there really is a heavy bias in favor of one-boxing in a more representative sample of the actual Newcomb’s problem, then a predictor that always predicts one-boxing could be surprisingly accurate.
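A quick back-of-the-envelope version of that last sentence, with the one-boxing base rate treated as an assumed free parameter:

```python
# If a fraction q of players one-box, then a predictor that always predicts
# "one-box" is right with probability q, versus 0.5 for a coin flip.
# The base rates below are assumptions, not survey results.
for q in (0.6, 0.8, 0.9, 0.95):
    print(f"one-boxing base rate {q:.0%}: "
          f"always-predict-one-box accuracy {q:.0%}, coin flip 50%")
```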