The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated… only 16% cooperate.
Is there really anything exceptional in the 3% figure? 3% of people facing a player who chose “Foe” preferred to transfer money from the game show owners to that player. 97% preferred the game show owners to keep the money. If anything, 3% is below what I would have expected. More surprising [IMO] is the fact that 16% co-operate when they know that it costs them to do so. I have no idea what that 16% were thinking.
The participants don’t know the rules, and have been given a hint that they don’t know the rules: the host said the choices would be independent/hidden, but then tells the second contestant what the first one chose. So they can easily suspect that the host is lying, or that he might then give the first contestant a chance to switch his choice, etc.
This is a good catch, and a fair criticism of the “deliberately spoil the experiment” design.
A better design would put the contestants in adjacent rooms, but allow the second contestant to “accidentally” overhear the first (e.g. by having the first speak loudly through thin walls). The experimenter would then enter the second contestant’s room and ask whether they want to co-operate or defect.
My guess is that those people were willing to pay to reward the other player for cooperating. (That is, they gain psychic value from the other person’s gain, and from knowing it was the result of their own actions.)
I think you can apply TDT (timeless decision theory) of sorts: if I were in the other person’s position, I would want them to cooperate. Coupled with the fact that the roles were selected randomly, you could essentially make a precommitment: if another person and I are ever in this situation, I’ll cooperate no matter what. I think that doesn’t change your expected value, but it does reduce variance.
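A quick sketch of that last claim, assuming the usual Friend-or-Foe payoffs (the pot is split on mutual cooperation, a lone defector takes it all, mutual defection pays nothing) and an arbitrary pot of 100:

```python
# Sketch checking the EV/variance claim under assumed Friend-or-Foe
# payoffs. The pot size POT = 100 is an arbitrary illustration.
from statistics import mean, pvariance

POT = 100  # assumed pot size

# Policy pair A: both players precommit to cooperate.
# Your payoff is POT/2 regardless of which role you're assigned.
coop_outcomes = [POT / 2, POT / 2]

# Policy pair B: the second player defects after seeing cooperation.
# Roles are assigned randomly, so you get the whole pot half the time
# (when you're the defecting second player) and nothing the other half.
defect_outcomes = [POT, 0]

for name, outcomes in [("mutual cooperation", coop_outcomes),
                       ("second-player defection", defect_outcomes)]:
    print(f"{name}: EV = {mean(outcomes)}, variance = {pvariance(outcomes)}")

# mutual cooperation: EV = 50.0, variance = 0.0
# second-player defection: EV = 50.0, variance = 2500.0
```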
More surprising [IMO] is the fact that 16% co-operate when they know that it costs them to do so. I have no idea what that 16% were thinking.
I’d be thinking that I’d like to do the honorable/right thing. There are non-monetary costs to defecting, including a sense of guilt. That’s the difference from a True Prisoner’s Dilemma, where you actually prefer defecting if you know the other person cooperated.
That last “if you know the other person cooperated” is unnecessary: in a True Prisoner’s Dilemma, each player prefers defecting in any circumstance.
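A minimal check of that dominance claim, assuming the conventional payoff ordering T > R > P > S with the illustrative values (5, 3, 1, 0):

```python
# Check that defection strictly dominates in a one-shot Prisoner's
# Dilemma with the usual ordering T > R > P > S. The numbers are
# conventional illustrative values, not from the game show.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

# Row player's payoff given (my_move, their_move).
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]
print("Defecting pays strictly more whatever the opponent does.")
```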
Not quite: if you’re playing a True Prisoner’s Dilemma against a copy of yourself, for example, you prefer cooperating, because you know your choice and your copy’s choice will be identical, but you don’t know what that choice will be before you actually make it.
And if you don’t know for sure that the choices will be identical, but some other logical connection makes it, say, 99% certain that they will be (e.g. your copies were created not at that particular moment but a month ago, and were allowed to read different random books in the meantime), one could argue you’re still better off preferring cooperation.
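To make that concrete, here is the expected-value arithmetic, assuming the copy’s choice matches yours with probability p = 0.99 and reusing the illustrative (5, 3, 1, 0) payoffs from the sketch above:

```python
# Expected value when the copy's choice matches yours with
# probability p (p = 0.99 as in the comment). Payoffs reuse the
# conventional illustrative values T, R, P, S = 5, 3, 1, 0.
T, R, P, S = 5, 3, 1, 0
p = 0.99  # probability the copy's choice matches yours

ev_cooperate = p * R + (1 - p) * S  # copy almost surely cooperates too
ev_defect = p * P + (1 - p) * T     # copy almost surely defects too

print(f"EV(cooperate) = {ev_cooperate:.2f}")  # 2.97
print(f"EV(defect)    = {ev_defect:.2f}")     # 1.04
```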
Given the context, I was assuming the scenario being discussed was one where the two players’ decisions are independent, and where no one expects they may be playing against themselves.
You’re right that the game changes if a player thinks that their choice influences (or, arguably, predicts) their opponent’s choice.
BTW, on the precommitment point: lots of LWers said they’d give money to Omega in the Counterfactual Mugging.
If you were playing against yourself, would you co-operate?