Interlude for Behavioral Economics
The so-called “rational” solutions to the Prisoner’s Dilemma and Ultimatum Game are suboptimal, to say the least. Humans have various kludges, added by both nature and nurture, to do better, but they’re not perfect and they’re certainly not simple. They leave entirely open the question of what real people will actually do in these situations, a question that can only be answered with hard data.
As in so many other areas, our most important information comes from reality television. The Art of Strategy discusses a US game show “Friend or Foe” where a team of two contestants earned money by answering trivia questions. At the end of the show, the team used a sort-of Prisoner’s Dilemma to split their winnings: each team member chose “Friend” (cooperate) or “Foe” (defect). If one player cooperated and the other defected, the defector kept 100% of the pot. If both cooperated, each kept 50%. And if both defected, neither kept anything (this is a significant difference from the standard dilemma, where a player is a little better off defecting than cooperating if her opponent defects).
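The parenthetical difference matters game-theoretically: in the standard dilemma, defection strictly dominates cooperation, while on “Friend or Foe” it only weakly dominates, since you gain nothing by defecting against a defector. A minimal sketch (the standard-PD payoff of 1 for mutual defection is chosen purely for illustration):

```python
# Payoffs to the row player, pot normalized to 100.
friend_or_foe = {
    ("C", "C"): 50, ("C", "D"): 0,
    ("D", "C"): 100, ("D", "D"): 0,
}
standard_pd = {
    ("C", "C"): 50, ("C", "D"): 0,
    ("D", "C"): 100, ("D", "D"): 1,  # defecting against a defector pays slightly more
}

def dominance(payoffs):
    """Check whether 'D' strictly or only weakly dominates 'C'."""
    gains = [payoffs[("D", opp)] - payoffs[("C", opp)] for opp in ("C", "D")]
    if all(g > 0 for g in gains):
        return "strict"
    if all(g >= 0 for g in gains) and any(g > 0 for g in gains):
        return "weak"
    return "none"

print(dominance(standard_pd))    # strict: defecting is better no matter what
print(dominance(friend_or_foe))  # weak: defecting only ties when the opponent defects
```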
Players chose “Friend” about 45% of the time. Significantly, this number stayed constant regardless of the size of the pot: players were no more likely to cooperate when splitting small amounts of money than large ones.
Players seemed to want to play “Friend” if and only if they expected their opponents to do the same. This is not rational, but it accords with the “Tit-for-Tat” strategy hypothesized to be the evolutionary solution to the Prisoner’s Dilemma. It played out on the show in a surprising way: players’ choices started off random, but as the show went on and contestants who had seen previous episodes began participating, they started basing their decisions on observable characteristics of their opponents. For example, in the first season women cooperated more often than men, so by the second season players cooperated more often when their opponent was a woman, regardless of whether they were men or women themselves.
Among the superficial characteristics used, the only one to reach statistical significance according to the study was age: players below the median age of 27 played “Foe” more often than those over it (65% vs. 39%, p < .001). Other nonsignificant tendencies were for men to defect more than women (53% vs. 46%, p=.34) and for black people to defect more than white people (58% vs. 48%, p=.33). These nonsignificant tendencies became important because the players themselves attributed significance to them: for example, by the second season women were playing “Foe” 60% of the time against men but only 45% of the time against women (p<.01) presumably because women were perceived to be more likely to play “Friend” back; also during the second season, white people would play “Foe” 75% against black people, but only 54% of the time against other white people.
(This risks self-fulfilling prophecies. If I am a black man playing a white woman, I expect she will expect me to play “Foe” against her, and she will “reciprocate” by playing “Foe” herself. Therefore, I may choose to “reciprocate” against her by playing “Foe” myself, even if I wasn’t originally intending to do so, and other white women might observe this, thus creating a vicious cycle.)
In any case, these attempts at coordinated play worked, but only imperfectly. By the second season, 57% of pairs chose the same option—either (C, C) or (D, D).
The Art of Strategy describes another great Prisoner’s Dilemma experiment. In this one, the experimenters spoiled the game: they told both players that they would be deciding simultaneously, but in fact they let Player 1 decide first, then secretly approached Player 2, told her Player 1’s decision, and let her take this information into account when making her own choice.
Why should this be interesting? From the previous data, we know that humans play “tit-for-expected-tat”: they will generally cooperate if they believe their opponent will cooperate too. We can come up with two hypotheses to explain this behavior. First, it could be a folk version of Timeless Decision Theory or Hofstadter’s superrationality: a belief that one’s own decision literally determines one’s opponent’s decision. Second, it could be based on a belief in fairness: if I think my opponent cooperated, it’s only decent that I do the same.
The “researchers spoil the setup” experiment can distinguish between these two hypotheses. If people believe their choice determines that of their opponent, then once they know their opponent’s choice they no longer have to worry and can freely defect to maximize their own winnings. But if people want to cooperate to reward their opponent, then learning that their opponent cooperated for sure should only increase their willingness to reciprocate.
The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated.........only 16% cooperate. When the same researchers in the same lab didn’t tell the second player anything, 37% cooperated.
This is a pretty resounding victory for the “folk version of superrationality” hypothesis. 21% of people wouldn’t cooperate if they heard their opponent had defected, wouldn’t cooperate if they heard their opponent had cooperated, but would cooperate if they didn’t know which of the two their opponent had played.
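The 21% figure follows from a mild assumption: anyone who cooperates after learning the opponent cooperated would also cooperate under uncertainty, so the excess cooperators under uncertainty are those who cooperate only when the opponent’s move is unknown. A quick check, using the rates from the experiment above:

```python
# Cooperation rates from the "spoiled" Prisoner's Dilemma experiment.
coop_if_told_defected   = 0.03  # "3% of people are Jesus"
coop_if_told_cooperated = 0.16
coop_if_told_nothing    = 0.37

# Assuming everyone who cooperates after hearing "cooperated" would also
# cooperate under uncertainty, the remainder cooperate ONLY when the
# opponent's move is unknown:
uncertainty_only = coop_if_told_nothing - coop_if_told_cooperated
print(f"{uncertainty_only:.0%}")  # 21%
```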
Moving on to the Ultimatum Game: very broadly, the first player usually offers between 30 and 50 percent, and the second player tends to accept. If the first player offers less than about 20 percent, the second player tends to reject it.
Like the Prisoner’s Dilemma, the amount of money at stake doesn’t seem to matter. This is really surprising! Imagine you played an Ultimatum Game for a billion dollars. The first player proposes $990 million for herself, $10 million for you. On the one hand, this is a 99-1 split, just as unfair as $99 versus $1. On the other hand, ten million dollars!
Although tycoons have yet to donate a billion dollars to use for Ultimatum Game experiments, researchers have done the next best thing and flown out to Third World countries where even $100 can be an impressive amount of money. In games played in Indonesia for a pot containing a sixth of the average Indonesian’s yearly income, players still rejected unfair offers. In fact, at these levels the first player tended to propose fairer deals than at lower stakes—maybe because it would be a disaster if her offer got rejected.
It was originally believed that results in the Ultimatum Game were mostly independent of culture. Groups in the US, Israel, Japan, Eastern Europe, and Indonesia all got more or less the same results. But this elegant simplicity was, like so many other things, ruined by the Machiguenga Indians of eastern Peru. They tend to make offers around 25%, and will accept pretty much anything.
One more interesting finding: people who accept low offers in the Ultimatum Game have lower testosterone than those who reject them.
There is a certain degenerate form of the Ultimatum Game called the Dictator Game. In the Dictator Game, the second player doesn’t have the option of vetoing the first player’s distribution. In fact, the second player doesn’t do anything at all; the first player distributes the money, both players receive the amount of money the first player decided upon, and the game ends. A perfectly selfish first player would take 100% of the money in the Dictator Game, leaving the second player with nothing.
In a meta-analysis of 129 papers covering over 41,000 individual games, the average amount the first player gave the second player was 28.35% of the pot. 36% of first players took everything, 17% divided the pot equally, and 5% gave everything to the second player, nearly doubling our previous estimate of what percentage of people are Jesus.
The meta-analysis checks many different results, most of which are insignificant, but a few stand out. Subjects playing the Dictator Game “against” a charity are much more generous; up to a quarter give everything. When the experimenter promises to “match” each dollar given away (e.g., the dictator gets $100, but if she gives it all to the second player, the second player gets $200), the dictator gives much more (somewhat surprising, as matching might just as easily serve as an excuse to keep about $66 for yourself while claiming that both players still got equal money). On the other hand, if the experimenters give the second player a free $100, so that she starts off richer than the dictator, the dictator compensates by giving her much less.
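The “keep $66” excuse is just the arithmetic of the matching rule: with a $100 pot, keeping k dollars leaves the recipient with 2(100 − k), so both players end up equal exactly when k = 2(100 − k), i.e. k ≈ $66.67. A quick check:

```python
from fractions import Fraction

pot = 100
# With dollar-for-dollar matching, a dictator who keeps k dollars leaves
# the recipient with 2 * (pot - k).  The "both players got equal money"
# claim holds exactly when k == 2 * (pot - k), i.e. k == 2 * pot / 3.
k = Fraction(2 * pot, 3)
assert 2 * (pot - k) == k
print(float(k))  # roughly 66.67: keep two-thirds and still claim an equal split
```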
Old people give more than young people, and non-students give more than students. People from “primitive” societies give more than people from more developed societies, and the more primitive the society, the stronger the effect. The most important factor, though? As always, sex. Women both give more and get more in dictator games.
It is somewhat inspiring that so many people give so much in this game, but before we become too excited about the fundamental goodness of humanity, Art of Strategy mentions a great experiment by Dana, Cain, and Dawes. The subjects were offered a choice: either play the Dictator Game with a second player for $10, or take $9, in which case the second subject would be sent home without ever learning what the experiment was about. A third of participants took the second option.
So generosity in the Dictator Game isn’t always about wanting to help other people. It seems to be about avoiding the knowledge, deep down, that some anonymous person who probably doesn’t even know your name and will never see you again is disappointed in you. Remove the little problem of the other person knowing what you did, and people will not only keep the money, but even pay the experimenter a dollar to keep it quiet.
A very interesting episode of a similar game show, Golden Balls (where “Split” = “Friend”, “Steal” = “Foe”).
As Bruce Schneier comments:
Wow. I also won’t give anything away, but I agree that this is an insane round of the game. There are two agents with very different modeling processes trying to achieve the best outcome for themselves, but (I don’t know if this applies only to me or to others too) unlike in a normal PD, we are not participants, so we don’t know any of the agents’ processes, which makes it very enjoyable to watch. This round is a testament to something, that’s for sure.
Steve Landsburg also blogged about this show (video clip included).
Regarding the Dana, Cain and Dawes experiment, the abstract says: “Over two studies, we found that about one third of participants were willing to exit a $10 dictator game and take $9 instead. ” One third is less than “the majority of participants” stated by you. A fisherman’s tale?
Good catch. I got the numbers out of Art of Strategy, then searched for the study online. Either it’s a slightly different study than the one cited in the book with slightly different results, or I’m transmitting an error from there.
Steve Landsburg has an interesting point about versions of the Dictator Game (and several other similar games) in which people have the option to “destroy” some or all of the money if they don’t like the offer. In particular, he recently commented on the so-called “Destructor Game” (pdf of the paper here).
In this game, participants were given the option of deciding whether or not to take away some of the money that the experimenter had given to some of the other participants. When they chose to do so, the experimenters concluded that they were indulging a taste for destruction. As Landsburg sensibly points out, nothing was actually destroyed in any of these experiments. Money was simply transferred from an anonymous subject back to the experimenter.
The reason I bring this up here is that it seems as good a place as any to get an answer to my next question: has anyone ever actually run such an experiment in which the goods to be destroyed really were destroyed? (You can imagine giving everyone candy bars, and giving participants the option to take someone else’s candy bars and throw them into some stinking garbage heap.) My instinct is that Landsburg is right, and people would be less likely to engage in destructive behaviour if they were destroying actual goods instead of just paper money, but I would be interested to see if this has ever actually been studied. Does anyone know?
I’ve never heard of a “destroy the money” experiment, but the fact that most economists before Landsburg didn’t think of this, and I didn’t think of it, and my sources didn’t think of it, makes me skeptical that the average participant in the Dictator Game is thinking of it.
I’m also reminded of stories about medical malpractice lawsuits, where juries will sometimes award really big sums of money even when they’re not sure whether the doctor was at fault, on the grounds that hospitals/clinics/insurers are large faceless institutions that probably have so much money they won’t miss a little. I would expect players to treat the researcher (presumably working off grant money from a big research university) the same way juries treat hospitals.
On the theme of point destruction, there’s a reasonably big literature on a variation where destruction of rewards can be undertaken by players to reduce the rewards of other players, with variations where they control who can see what cooperative acts and vary the sizes of the group. Dunbar’s number sometimes makes an appearance. I imagine you’ve heard of this, and if you haven’t hopefully this comment will add it to your arsenal of game theory. It would be awesome if it made an appearance later in your sequence :-)
The general hand-wavy upshot is that for humans (assuming you’re in a situation where large scale cooperation and positive externalities are actually possible and valuable to you) the best situation is to be in a large-ish group where people can at least see defectors after the act of defecting, and can also see other people’s punishment behavior, and can punish both outright defectors and also “punish non-punishers”. So far as I’m aware, you don’t need recourse to more recursion than that. You don’t have to get totally silly with punishing of non-punishers of non-punishers of defectors. There are elements of the literature here, here, and here, if anyone wants entry points. The first is most accessible :-)
Is there really anything exceptional in the 3% figure? 3% of people facing a player who chose “Foe” preferred to transfer money from the game show owners to that player. 97% preferred the game show owners to keep the money. If anything, 3% is below what I would have expected. More surprising [IMO] is the fact that 16% co-operate when they know that it costs them to do so. I have no idea what that 16% were thinking.
The participants don’t know the rules, and have been given a hint that they don’t know the rules—as the host said that the choices will be independent/hidden, but then is telling you the other contestant’s choice. So they can easily assume a chance that the host is lying, or might then give the first contestant a chance to switch his choice, etc.
This is a good catch, and criticism of the “deliberately spoil the experiment” design.
A better design would be to put the contestants in adjacent rooms, but to allow the second contestant to “accidentally” overhear the first (e.g. speaking loudly, through thin walls). Then the experimenter enters the second contestant’s room and asks them whether they want to co-operate or defect.
My guess is those people were willing to pay to reward the other player for cooperating. (That is, they gain psychic value from the other person’s gain, and knowing it was the result of their actions.)
I think you can apply TDT of sorts: if I was in the other person’s position, I would want them to cooperate. Coupled with the fact that the roles were selected randomly, you could essentially make a precommitment: if another person and I are in this situation, I’ll cooperate no matter what. I think that doesn’t change your expected value, but it does reduce variance.
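The expected-value claim can be checked under “Friend or Foe”-style payoffs (assumed here; the spoiled experiment’s actual payoffs weren’t specified). With roles assigned by coin flip and a symmetric opponent who mirrors your policy, “defect after seeing cooperation” and “cooperate no matter what” have the same mean payoff, but the precommitment has zero variance:

```python
import statistics

pot = 100.0  # Friend-or-Foe style payoffs: (C,C) splits the pot,
             # (C,D) gives everything to the defector, (D,D) gives nothing

def payoff(me, other):
    if me == "C" and other == "C":
        return pot / 2
    if me == "D" and other == "C":
        return pot
    return 0.0  # cooperated against a defector, or mutual defection

def your_payoffs(second_mover_policy):
    """Your payoff in each equally likely role assignment, assuming the
    opponent follows the same policy and the first mover cooperates."""
    results = []
    for you_move_first in (True, False):
        first = "C"
        second = second_mover_policy(first)  # second mover has seen the first move
        results.append(payoff(first, second) if you_move_first
                       else payoff(second, first))
    return results

exploit = your_payoffs(lambda seen: "D")  # defect after seeing cooperation
commit  = your_payoffs(lambda seen: "C")  # precommit to cooperate regardless

print(statistics.mean(exploit), statistics.pvariance(exploit))  # 50.0 2500.0
print(statistics.mean(commit),  statistics.pvariance(commit))   # 50.0 0.0
```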
BTW, lots of LWers said they’d give money to Omega in the Counterfactual mugging.
I’d be thinking that I’d like to do the honorable/right thing. There exist non-monetary costs in defecting; those include a sense of guilt. That’s the difference to a True Prisoner’s Dilemma, where you actually prefer defecting if you know the other person cooperated.
That last “if you know the other person cooperated” is unnecessary, in a True Prisoner’s Dilemma each player prefers defecting in any circumstance.
Not quite: e.g. If you’re playing True Prisoner’s Dilemma against a copy of yourself, you prefer cooperating, because you know your choice and your copy’s choice will be identical, but you don’t know what the choice will be before you actually make it.
If you don’t know for sure that they’ll be identical, but there’s some other logical connection that will e.g. make it 99% certain they’ll be identical. (e.g. your copies were not created at that particular moment, but a month ago, and were allowed to read different random books in the meantime), then one would argue you’re still better off preferring cooperation.
Given the context, I was assuming the scenario being discussed was one where the two players’ decisions are independent, and where no one expects they may be playing against themselves.
You’re right that the game changes if a player thinks that their choice influences (or, arguably, predicts) their opponent’s choice.
If you were playing against yourself, would you co-operate?
I wonder what people do in this Ultimatum Game “variant”:
Player A and B have a contest of some sort (for example, they might run a race, or play a game of checkers, or whatever), and the winner of the contest gets to be the one who makes the proposal in the Ultimatum Game.
The game theory is the same, the social context is quite different...
I managed to find this. There is a noticeable tendency for proposers to keep more of the money if they have earned it. This is especially pronounced in the Dictator Game, but also exists in the Ultimatum Game.
Although I can’t recall where I got it from, and Google is failing me, I’m pretty sure there’s a body of experimental evidence along these lines, showing that the second player is overwhelmingly more likely to accept an unfair split if the roles are designated in a way you describe.
I don’t have time to find an example right now, but I have some experience in this field and just want to affirm sixes_and_sevens’ assertion.
Wait, is this a joke, or have the Machiguenga really provided counterexamples to lots of social science hypotheses?
He also says:
I’m guessing both are a joke.
Yeah, I also took it as a joke.
I took the “like so many other things” to only apply to “was ruined”, not to “was ruined by the Machiguenga”...
I think he means that many elegant, simple hypotheses have obscure counterexamples, not that the Machiguenga Indians are typically one of those counterexamples.
Is there not already a past sequence/post dealing with the creation of such ambiguities when there are multiple plausible implicit statements inferable from an inexact syntactical construction? I thought I saw something along those lines somewhere yesterday, but I can’t seem to find it by just retracing my steps.
I genuinely can’t tell if this is intentional.
These two could both be explained by rich people giving more than poor people. Is that the case?
We hardly need this experiment to know that people don’t tend to arbitrarily give each other money for no particular reason—when was the last time you received an anonymous envelope full of cash in the mail?
I know several people who went through phases of leaving little “prizes” sprinkled around the world in the hopes that random strangers would discover them, collect the prizes, and think better of the world for this reason. I have never personally received anonymous cash in the mail, but it wouldn’t entirely surprise me if it happened some day.
It seems more likely if people have some way of getting your mailing address without directly asking for it, but I can understand that this would quite possibly have negative consequences too >.>
Same comment as on your previous article in the series. Tit-for-Tat co-operates with a player who co-operated last time, not with a partner that it anticipates will co-operate this time.
It is reputational systems which reward correct prediction (co-operate if and only if you predict that the other player will co-operate this time). That is because the reputational damage from defecting against a co-operator is large : the co-operator gains sympathy; the defector risks punishment or reduced co-operation from other observers. Whereas if a person who is generally known to co-operate defects against another defector, there is generally not a reputational hit (indeed there is probably a slight uplift to reputation for predicting correctly and not letting the defector get away with it).
Super-rational players co-operate if and only if the other player is super-rational. If this were the strategy that humans in fact followed (i.e., if there were ways in which super-rational players could reliably recognize each other), then co-operation would be pretty near universal among humans in PDs. But it isn’t.
The empirical evidence (from this show, and other studies) is that humans play a reputational strategy rather than pure Tit-for-Tat or super-rational strategy. It appears to be what humans do, and there is a fairly convincing case it is what we’re adapted to do.
EDIT: The other evidence you quote in your article is very interesting though:
That suggests a mixture between reputational and super-rational strategies with a bit of “pure co-operate” thrown in as well. If there were a pure super-rational strategy then no-one would co-operate after hearing for sure that the other player had already co-operated. (This is unless they both knew for sure going into the game that the other player was super-rational; then they could both commit to co-operate regardless; it is equivalent in that case to counterfactual mugging, or to Newcomb with transparent boxes). Whereas if there were a pure reputational strategy, then knowing that the other player had co-operated would increase the probability of co-operating, not reduce it. Interesting.
I’m wondering if there are any game-theory models which predict a mixed equilibrium between super-rational and reputation, and whether the equilibrium allows a small % of “pure co-operators” into the mix as well?
Pure co-operate can be a reasonable strategy, even with foreknowledge of the opponent’s defection in this round, if you think your opponent is playing something close to tit-for-tat and expect to play many more rounds with them.
Agree again. Yvain is misusing terms and misrepresenting evolutionary strategies. This sequence is vastly overrated.
Does this mean that a significant fraction players actually prefer the (C, C) outcome to the (D, C) outcome? What would happen if you pretended the game was PD but if there was a (D, C) result you offered the defector a (secret) chance to change their move to C? Would a lot of them accept that offer?
Actually, I’m not sure whether the extra move needs to be secret or whether it can be announced in the original rules.
I may have to test that variant. I occasionally work Prisoner’s Dilemma style situations in to my games, as it’s a very easy way to learn about the players :)
So people on “Friend or Foe” turned into CliqueBots? ;)
For those not catching the reference: CliqueBots
Kind of, with some of the cliques being self-destructive.
This looks backwards.
Thanks, fixed.
All this seems to suggest that in competitive games, people aim to get as much money as possible with every decision?
The more “primitive” people just don’t know the value of money.
It’s like giving candy to someone who has very little utility for it.
Cooperation suggests merely that some people might have more built-up tolerance for loss.
It does not seem to indicate any lack of greed.
“5% give everything to the second player, nearly doubling our previous estimate of what percent of people are Jesus”
I wonder how much “windfall” or similar circumstances around the money change how one responds. In my recent history I’ve had two windfall gains, one an inheritance and one the money that was being handed out as the “everyone gets a check” part of covid relief. In both cases I was happy to give the money to family who needed it more than me.
I raise this because I don’t think of myself as Jesus (not even by Scott’s fairly undemanding tithing / rational altruism standards). I think the dispositive thing was really that this was windfall money; I don’t think of “normal revenue” money in the same way. Would I consider money won while playing one of these games as windfall or as earned? I suspect it might be very fragile to the precise framing of the experiment...
I don’t think it’s surprising—the modified version of the game increases the amount of fuzziness that each dollar buys, but doesn’t increase the pain associated with spending that dollar. So the player will spend more dollars before the pain overtakes the fuzzy.
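That argument can be put into a toy model (all numbers invented for illustration): each dollar given buys a constant “fuzzy” equal to the match multiplier, while the pain of parting with the g-th dollar grows linearly with g. The dictator gives until marginal pain overtakes marginal fuzzies, so doubling the match doubles the gift:

```python
def optimal_gift(match, pain_scale=10.0, pot=100):
    # Warm glow from giving g dollars: match * g.
    # Pain of parting with g dollars: g**2 / (2 * pain_scale), so the
    # marginal pain of the g-th dollar is roughly g / pain_scale.
    def utility(g):
        return match * g - g * g / (2 * pain_scale)
    # Give the whole-dollar amount that maximizes fuzzies minus pain.
    return max(range(pot + 1), key=utility)

print(optimal_gift(1.0))  # 10: without matching
print(optimal_gift(2.0))  # 20: dollar-for-dollar matching doubles the gift
```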
Thanks for the great write-up.
I don’t think the “spoil the setup” experiment distinguishes TDT from the belief in fairness. Just because the second person’s decision comes after the first doesn’t mean it has no effect on the first. It’s very much like Newcomb’s problem in that regard, and one of the main points of TDT was to account for that effect. Depending on the details of the rewards and how strongly you think the other player’s decisions correlate with your own, it may make sense to precommit to cooperation even if you’re told the other person’s choice. And if it makes sense to precommit to cooperation, that’s what TDT will do (unless I’m missing something).
Do you have an alternate explanation for why so many fewer people cooperated in the “spoil the setup” experiment than in ordinary experiments?
The superrationality explanation still makes sense. If the other player’s choice is known, then symmetry is broken, so the superrational agent should defect.
Other than that, I’m not really sure what you mean by “explanation”. The “folk version of superrationality” sounds plausible, but the underlying causes of the experimental results still feel pretty mysterious. Demystifying them is well beyond my capability, but it’s certainly an interesting question.
As with the previous entries in the sequence, I like the article but strongly suggest that you add links between sequence entries.
Thanks. I’ll do that when I’m done with the whole thing, so that I don’t have to keep going back and adding new “Next In Sequence” posts when I post new articles.
That works.
I am not sure what you intended to say here, but the word “primitive” definitely looks like a red flag. As I don’t think I am the only one to believe this, I would ask you to please change the wording or delete this sentence.