On dividing the pie: I ran across this in an introductory game theory class. I think the instructor wanted us to figure out that there’s a regress and see how we dealt with it. Different groups did different things, but two members of my group wanted to be nice and not cut anyone out, so our collective behavior was not particularly rational. “It’s not about being nice! It’s about getting the points!” I kept saying, but at the time the group members were about 16 years old (as was I), had varying math backgrounds, and some were less interested in that aspect of the game.
I think at least one group realized there would always be a way to undermine the coalitions that assembled, and cut everyone in equally.
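For concreteness, here is a minimal sketch of that regress (my own toy model: three players, majority rule, illustrative numbers, not the classroom game’s exact setup). Whatever split a two-player coalition agrees on, the player who was cut out can buy one insider away, so no division is stable:

```python
# Toy model of the divide-the-pie regress (illustrative, not the
# classroom game's exact rules): three players split a pie by
# majority vote. Any split backed by a two-player coalition can be
# undermined, because the excluded player can offer one insider
# strictly more, so the coalitions never settle.

PIE = 12

def undermine(split):
    """The worst-off player poaches the cheaper coalition member."""
    loser = min(range(3), key=lambda i: split[i])
    target = min((i for i in range(3) if i != loser),
                 key=lambda i: split[i])
    new = [0, 0, 0]
    new[target] = split[target] + 1   # strictly better for the target
    new[loser] = PIE - new[target]    # the poacher keeps the rest
    return tuple(new)

split = (6, 6, 0)  # an even two-way split that shuts player 2 out
for step in range(6):
    print(step, split)
    split = undermine(split)
```

The splits cycle indefinitely; in cooperative-game terms, the majority divide-the-pie game has an empty core.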
One might guess that evolution granted us a strong fairness drive to avoid just these sorts of decision regresses.
Fail.
It’s not group selection: if group A splits things evenly and moves on, while group B goes around and around with fractious coalitions until a tiger comes along and eats them, then being in group A confers an individual advantage.
Clearly evolution also gave us the ability to make elaborate justifications as to why we, particularly, deserve more than an equal share. But that hardly disallows the fairness heuristic as a fallback option when the discussion is taking longer than it deserves. (And some people just have the stamina to keep arguing until everyone else has given up in disgust. These usually become middle managers or Congressmen.)
What you just described is group selection, and thus highly unlikely.
It’s to your individual benefit to be more (unconsciously) selfish and calculating in these situations, whether the other people in your group have a fairness drive or not.
Not if you are punished for selfishness. I’m not sure how sound the following analysis is (since I haven’t studied this kind of thing at all); it suggests that fairness is a stable strategy, and under some constraints a more feasible one than selfishness:
M. A. Nowak et al. (2000). “Fairness versus reason in the ultimatum game.” Science 289(5485):1773–1775. (PDF)
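For intuition, a toy stability check (my own sketch, not the model from the paper): in a population of “fair” players who offer half and reject anything less, a stingy mutant earns less, because its low offers keep getting rejected:

```python
# Toy stability check (my own sketch, not the Nowak et al. model):
# can a stingy mutant invade a population of fair ultimatum players?
# A strategy is (offer, threshold): the share offered as proposer,
# and the minimum share accepted as responder.

def round_payoffs(proposer, responder):
    """Split a pie of 1; rejection leaves both with nothing."""
    offer, threshold = proposer[0], responder[1]
    return (1 - offer, offer) if offer >= threshold else (0.0, 0.0)

def average_payoff(me, partners):
    """Mean payoff per partner, playing both roles against each one."""
    total = 0.0
    for other in partners:
        total += round_payoffs(me, other)[0]   # I propose to them
        total += round_payoffs(other, me)[1]   # they propose to me
    return total / len(partners)

fair = (0.5, 0.5)    # offer half, reject anything less
stingy = (0.1, 0.0)  # offer a pittance, accept anything
others = [fair] * 98

print("fair resident:", average_payoff(fair, others + [stingy]))   # ~0.99
print("stingy mutant:", average_payoff(stingy, others + [fair]))   # 0.50
```

Note that a mutant who offers half but accepts anything does exactly as well as the fair residents here, which is roughly why the paper brings in extra machinery (reputation about past rejections) to keep acceptance thresholds from eroding.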
See reply to Tim Tyler.
...and if your companions have circuitry for detecting and punishing selfish behaviour—what then? That’s how the “fairness drive” is implemented—get mad and punish cheaters until it hurts. That way, cheaters learn that crime doesn’t pay—and act fairly.
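Back-of-the-envelope version (my 10%/50% numbers are purely illustrative): a low-ball proposer out-earns a fair one only while punishers are rare; past roughly 44% punishers (1 − 0.5/0.9), fairness pays more in expectation:

```python
# Illustrative arithmetic (numbers are mine): when does punishment
# make cheating unprofitable? Punishers reject any offer below half,
# leaving the proposer with nothing.
low_offer, fair_offer = 0.1, 0.5

for punishers in (0.0, 0.25, 0.5, 0.75):
    cheat = (1 - low_offer) * (1 - punishers)  # paid only when not punished
    fair = 1 - fair_offer                      # a fair offer is always accepted
    print(f"punishers {punishers:.0%}: cheat earns {cheat:.2f}, fair earns {fair:.2f}")
```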
I agree. But you see how this individual selection pressure towards fairness is different from the group selection pressure that dclayh was actually asserting?
You and EY seem to be the people who are talking about group selection.
Not when the cost (including opportunity cost) of doing the calculating outweighs the benefit it would give you.
You’re introducing weaker and less plausible factors to rescue a mistaken assertion. It’s not worth it.
As pointed out below in this thread, the fairness drive almost certainly comes from the individual pressure of cheaters being punished, not from any group pressure as you asserted above.
Statement of the obvious: Spending excessive time deciding is neither rational nor evolutionarily favored.