I teach an undergraduate game theory course at Smith College. Many students start by thinking that rational people should cooperate in the prisoners’ dilemma. I think part of the value of game theory is in explaining why rational people would not cooperate, even knowing that everyone not cooperating makes them worse off. If you redefine rationality such that you should cooperate in the prisoners’ dilemma, I think you have removed much of the illuminating value of game theory. Here is a question I will be asking my game theory students on the first day of class:
Our city is at war with a rival city, with devastating consequences awaiting the loser. Just before our warriors leave for the decisive battle, the demon Moloch appears and says “sacrifice ten healthy, loved children and I will give +7 killing power (which is a lot) to your city’s troops and subtract 7 from the killing power of your enemy. And since I’m an honest demon, know that right now I am offering this same deal to your enemy.” Should our city accept Moloch’s offer?
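For concreteness, here is a minimal sketch of the payoff structure behind Moloch’s offer. The specific utility numbers are illustrative assumptions of mine, chosen only so that accepting is better for a city no matter what its rival does, while mutual refusal beats mutual acceptance:

```python
# Illustrative payoffs for the Moloch game. The numbers are assumptions, picked to
# reflect the story: a +7/-7 swing in killing power likely decides the battle, and
# sacrificing ten children carries a smaller (but real) cost.
# Each entry is (our payoff, their payoff) for (our choice, their choice).
payoffs = {
    ("accept", "accept"): (-20, -20),   # edges cancel; both cities lose ten children
    ("accept", "refuse"): (80, -100),   # we likely win the battle, but lose ten children
    ("refuse", "accept"): (-100, 80),   # we likely lose the battle
    ("refuse", "refuse"): (0, 0),       # status quo
}

# Accepting is a dominant strategy: better for us whatever the rival city does...
for their_choice in ("accept", "refuse"):
    assert payoffs[("accept", their_choice)][0] > payoffs[("refuse", their_choice)][0]

# ...yet both cities prefer mutual refusal to mutual acceptance.
assert payoffs[("refuse", "refuse")][0] > payoffs[("accept", "accept")][0]
print("Accepting dominates, but both cities accepting is worse for both.")
```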
I believe that under your definition of rationality, this Moloch example loses its power to, for example, partly explain the causes of WWI.
I am defining rationality as the ability to make good decisions that get the agent what it wants. In other words, maximizing utility. Under that definition, the rational choice is to cooperate, as the article explains. You can certainly define rationality in some other way like “follows this elegant mathematical theory I’m partial to”, but when that mathematical theory leads to bad outcomes in the real world, it seems disingenuous to call that “rationality”, and I’d recommend you pick a different word for it.
As for your city example, I think you’re failing to consider the relevance of common knowledge. It’s only rational to cooperate if you’re confident that the other player is also rational and knows the same things about you. In many real-world situations that is not the case, and the decision of whether to cooperate or defect will be based on the exact correlation you think your decisions have with the other party; if that number is low, then defecting is the correct choice. But if both cities are confident enough that the other follows the same decision process (say, they have the exact same political parties and structure, and all the politicians are very similar to each other), then refusing the demon’s offer is correct, since it saves the lives of 20 children.
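To sketch that point with numbers, here is a rough calculation. It reuses the illustrative payoffs from the snippet above and loosely treats “correlation” as the probability p that the rival city ends up making the same choice we do; both the payoffs and this framing are my own assumptions, not anything from the article:

```python
# Hypothetical utilities, reused from the earlier sketch:
# both refuse = 0, both accept = -20, we accept while they refuse = 80,
# we refuse while they accept = -100.
def expected_utility(choice, p):
    """p = probability the rival city makes the same choice we do."""
    if choice == "refuse":
        return p * 0 + (1 - p) * (-100)
    return p * (-20) + (1 - p) * 80

for p in (0.0, 0.5, 0.9, 0.95, 1.0):
    refuse, accept = expected_utility("refuse", p), expected_utility("accept", p)
    better = "refuse" if refuse > accept else "accept"
    print(f"p={p:.2f}  refuse={refuse:7.1f}  accept={accept:7.1f}  -> {better}")

# With these particular numbers, refusing only wins once p exceeds 0.9;
# at low correlation, accepting (defecting) gives the higher expected utility.
```

The exact threshold obviously depends on the payoffs chosen, but the qualitative point is the one made above: low correlation favors defecting, high correlation favors refusing the offer.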
I’ll admit to being a little confused by your comment, since I feel like I already explained these things pretty explicitly in the article? I’d like to figure out where the miscommunication is/was occurring so I can address it better.
I think the disagreement is that I think the traditional approach to the prisoners’ dilemma makes it more useful as a tool for understanding and teaching about the world. Any miscommunication is probably my fault for failing to sufficiently engage with your arguments, but it FEELS to me like you are either redefining rationality or creating a game that is not a prisoners’ dilemma. I would define the prisoners’ dilemma as a game in which both parties have a dominant strategy in which they take actions that harm the other player, yet both parties are better off if neither plays this dominant strategy than if both do, and I would define a dominant strategy as something a rational player always plays regardless of what he thinks the other player will do. I realize I am kind of cheating by trying to win through definitions.
Yeah, I think that sort of presentation is anti-useful for understanding the world, since it’s picking a rather arbitrary mathematical theory and just insisting “this is what rational people do”, without getting people to think it through and understand why or if that’s actually true.
The reason a rational agent will likely defect in a realistic prisoner’s dilemma against a normal human is that it believes the human’s actions to be largely uncorrelated with its own, since it doesn’t have a good enough model of the human’s mind to know how it thinks. (And the reason humans defect is the same, with the added obstacle that humans aren’t even rational themselves.)
Teaching that rational agents defect because that’s the Nash equilibrium and rational agents always go to the Nash equilibrium is just an incorrect model of rationality, and agents that are actually rational can consistently win against Nash-seekers.
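As a toy illustration of that last claim (everything here is an assumption of mine: the standard prisoner’s-dilemma payoffs, the strategy names, and the conceit that each agent can recognize the opponent’s decision procedure), an agent that cooperates exactly when it is facing its own decision procedure never does worse than an always-defect Nash-seeker in any pairing, and does strictly better when paired with a copy of itself:

```python
# Standard one-shot prisoner's dilemma payoffs (T > R > P > S); numbers are illustrative.
R, P, T, S = 3, 1, 5, 0
PAYOFF = {("C", "C"): R, ("D", "D"): P, ("D", "C"): T, ("C", "D"): S}

def mirror(opponent):
    # Cooperate only when the opponent runs the same decision procedure
    # (crudely modeled here as being the very same function); otherwise defect.
    return "C" if opponent is mirror else "D"

def nash_seeker(opponent):
    # Always plays the single-shot Nash equilibrium move.
    return "D"

agents = [mirror, nash_seeker]
totals = {a.__name__: 0 for a in agents}
for a in agents:
    for b in agents:
        totals[a.__name__] += PAYOFF[(a(b), b(a))]

print(totals)  # mirror: R + P = 4, nash_seeker: P + P = 2 with the payoffs above
```

Against the Nash-seeker itself the mirror agent also defects and merely ties; its advantage shows up when rational agents meet each other.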
Do you believe that this Moloch example partly explains the causes of WW1? If so, how?
I think it can reasonably part-explain the military build-up before the war, where nations spent more money on defense (and so less on children’s healthcare).
But then you don’t need the demon Moloch to explain the game theory of military build-up. Drop the demon. It’s cleaner.
Cleaner, but less interesting, plus I have an entire Demon Games exercise we do on the first day of class. Yes, the defense build-up, but also everyone going to war even though everyone (with the exception of the Austro-Hungarians) thinks they are worse off going to war than keeping the peace that previously existed, while recognizing that if they don’t prepare for war, they will be worse off. Basically, if the Russians don’t mobilize they will be seen to have abandoned the Serbs, but if they do mobilize and the Germans do not quickly move to attack France through Belgium, then Russia and France will have the opportunity (which they would probably take) to crush Germany.
I certainly see how game theory part-explains the decisions to mobilize, and how those decisions part-caused WW1. Insofar as the Moloch example illustrates parts of game theory, I see the value. I was expecting something more.
In particular, Russia’s decision to mobilize doesn’t fit the pattern of a one-shot prisoner’s dilemma. The argument is that Russia had to mobilize in order for its support for Serbia to be taken seriously. But at that point Austria-Hungary had already implicitly threatened Serbia with war, which means Russia had already failed to have its support taken seriously. We need more complicated game theory to explain this decision.
I don’t think Austria-Hungary was in a prisoners’ dilemma, as they wanted a war so long as they would have German support. I think the prisoners’ dilemma (imperfectly) comes into play for Germany, Russia, and then France, given that Germany felt it needed to have Austria-Hungary as a long-term ally or risk getting crushed by France + Russia in some future war.