The thing is, I could just as easily be one of the ten as the eleventh (actually, ten times as easily), so it’s in my interests to support a norm where the eleventh sacrifices for the good of the ten. I am in very little danger of starving to death in Africa.
It’s not pleasant, but it is true.
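The arithmetic behind the grandparent's point can be made explicit. Here is a minimal sketch; the payoff numbers are illustrative assumptions, not anything stated in the thread:

```python
# Behind a veil of ignorance over the 10-vs-1 scenario, supporting the
# sacrifice norm has positive expected value whenever the total benefit
# to the ten outweighs the cost to the eleventh. Payoffs are illustrative.
from fractions import Fraction

n_beneficiaries = 10
benefit = 1                 # gain to each of the ten if the norm holds
cost = 4                    # loss borne by the eleventh, who sacrifices

p_beneficiary = Fraction(n_beneficiaries, n_beneficiaries + 1)   # 10/11
p_sacrificer = Fraction(1, n_beneficiaries + 1)                  # 1/11

ev_norm = p_beneficiary * benefit - p_sacrificer * cost
# 10/11 - 4/11 = 6/11 > 0, so ex ante it pays to support the norm
```

So as long as the norm's total benefit exceeds its total cost, someone who doesn't yet know which role they'll occupy comes out ahead by endorsing it.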
Teach everyone else to cooperate, then defect
Congratulations, you’ve written the most horrifying sentence I’ve read all day.
Tricking the other player is never justified? Did I miss something?
This site is supposed to be about rationality, but it’s covertly about altruism.
Not that covert, really.
Which is just the opposite of what you’d expect: if I recall correctly, students who took game-theory-oriented economics classes became less altruistic, not more.
Possibly not the case: the studies you’re probably thinking of measured donations to charities that did things like lobby for lower tuition, which is exactly the sort of thing you’d expect altruistic economists to oppose.
See, for example, Steven Landsburg on the subject.
You also have to deceive them into believing that you, personally, won’t defect. For humans, who almost never face genuinely one-off decision problems, your strategy shouldn’t work, for two reasons: the other players shouldn’t cooperate at high stakes without strong assurance that their opponent will cooperate given that they cooperate (some kind of publicly announced, externally enforced commitment), and you get too few shots at defecting before you acquire a bad reputation.
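The reputation point above can be illustrated with a toy simulation. This is a minimal sketch under invented assumptions (standard prisoner's-dilemma payoffs, a population that cooperates only with players whose public record is clean), not a claim about any real experiment:

```python
# Toy iterated prisoner's dilemma with public reputation: a player who
# teaches cooperation and then defects gets one windfall, after which the
# population punishes them. All numbers here are illustrative assumptions.

# Standard PD payoffs for the row player: T > R > P > S
T, R, P, S = 5, 3, 1, 0

def play(a_coop, b_coop):
    """Return (a_payoff, b_payoff) for one round."""
    if a_coop and b_coop:
        return R, R
    if a_coop and not b_coop:
        return S, T
    if not a_coop and b_coop:
        return T, S
    return P, P

def run(rounds, switch_at):
    """One 'teacher' plays a population member each round. The population
    cooperates only with players whose public record is clean; the teacher
    cooperates until round `switch_at`, then defects forever."""
    teacher_score, record_clean = 0, True
    for t in range(rounds):
        opponent_coops = record_clean        # reputation check
        teacher_coops = t < switch_at
        mine, _ = play(teacher_coops, opponent_coops)
        teacher_score += mine
        if not teacher_coops:
            record_clean = False             # defection becomes public
    return teacher_score

honest = run(20, switch_at=20)   # cooperates throughout: 20 * R = 60
tricky = run(20, switch_at=10)   # 10 * R + one T + 9 * P = 30 + 5 + 9 = 44
```

One defection yields a single temptation payoff before the bad reputation sets in, and the steady-state punishment more than eats the gain, which is the "too few shots at defecting" point in game form.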