I am defining rationality as the ability to make good decisions that get the agent what it wants. In other words, maximizing utility. Under that definition, the rational choice is to cooperate, as the article explains. You can certainly define rationality in some other way like “follows this elegant mathematical theory I’m partial to”, but when that mathematical theory leads to bad outcomes in the real world, it seems disingenuous to call that “rationality”, and I’d recommend you pick a different word for it.
As for your city example, I think you’re failing to consider the relevance of common knowledge. It’s only rational to cooperate if you’re confident that the other player is also rational and knows the same things about you. In many real-world situations that is not the case, and the decision of whether to cooperate or defect will depend on how strongly you think your decision is correlated with the other party’s; if that correlation is low, then defecting is the correct choice. But if both cities are confident enough that the other follows the same decision process (say, they have the exact same political parties and structure, and all the politicians are very similar to each other), then refusing the demon’s offer is correct, since it saves the lives of 20 children.
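To make the correlation point concrete, here’s a quick sketch. The payoff numbers are hypothetical (just the standard prisoners’-dilemma ordering T > R > P > S), and the single parameter is the probability that the other party ends up making the same choice you do:

```python
# Hypothetical standard PD payoffs: T (temptation to defect against a
# cooperator) > R (mutual cooperation) > P (mutual defection) > S (sucker).
T, R, P, S = 5, 3, 1, 0

def expected_utility(p_match):
    """p_match: probability the other player makes the same choice you do.
    Returns (EU of cooperating, EU of defecting)."""
    eu_cooperate = p_match * R + (1 - p_match) * S
    eu_defect = p_match * P + (1 - p_match) * T
    return eu_cooperate, eu_defect

# Cooperating wins exactly when p_match > (T - S) / ((T - S) + (R - P)).
threshold = (T - S) / ((T - S) + (R - P))  # 5/7 ≈ 0.714 with these numbers

for p in (0.3, threshold, 0.9):
    c, d = expected_utility(p)
    print(f"p={p:.3f}: cooperate={c:.2f}, defect={d:.2f}")
```

With these particular payoffs you’d need to believe the correlation is above roughly 0.71 before cooperating beats defecting; below that, defection really is the better choice, which is exactly the low-correlation case described above.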
I’ll admit to being a little confused by your comment, since I feel like I already explained these things pretty explicitly in the article? I’d like to figure out where the miscommunication is/was occurring so I can address it better.
I think the disagreement is that I think the traditional approach to the prisoners’ dilemma makes it more useful as a tool for understanding and teaching about the world. Any miscommunication is probably my fault for failing to sufficiently engage with your arguments, but it FEELS to me like you are either redefining rationality or creating a game that is not a prisoners’ dilemma. I would define the prisoners’ dilemma as a game in which both parties have a dominant strategy of taking actions that harm the other player, yet both parties are better off if neither plays this dominant strategy than if both do; and I would define a dominant strategy as something a rational player always plays regardless of what he thinks the other player will do. I realize I am kind of cheating by trying to win through definitions.
Yeah, I think that sort of presentation is anti-useful for understanding the world, since it’s picking a rather arbitrary mathematical theory and just insisting “this is what rational people do”, without getting people to think it through and understand why or if that’s actually true.
The reason a rational agent will likely defect in a realistic prisoner’s dilemma against a normal human is that it believes the human’s actions to be largely uncorrelated with its own, since it doesn’t have a good enough model of the human’s mind to know how it thinks. (And the reason humans defect is the same, with the added obstacle that the human isn’t even rational themselves.)
Teaching that rational agents defect because that’s the Nash equilibrium and rational agents always go to the Nash equilibrium is just an incorrect model of rationality, and agents that are actually rational can consistently win against Nash-seekers.
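As a toy illustration of that last claim (a deliberately simplified setup, not a model of real agents): put unconditional defectors in a round-robin against agents that cooperate exactly when the opponent runs the same decision process, using the same hypothetical payoffs as before:

```python
# Row/column payoffs for (my move, their move); standard PD numbers.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class NashSeeker:
    def move(self, opponent):
        return "D"  # defection is the dominant strategy / Nash equilibrium

class MirrorCooperator:
    def move(self, opponent):
        # Cooperate iff the opponent follows the same decision process
        # (perfect correlation); against anyone else, defect.
        return "C" if isinstance(opponent, MirrorCooperator) else "D"

agents = [NashSeeker(), NashSeeker(), MirrorCooperator(), MirrorCooperator()]
scores = [0] * len(agents)
for i in range(len(agents)):
    for j in range(i + 1, len(agents)):
        mi, mj = agents[i].move(agents[j]), agents[j].move(agents[i])
        si, sj = PAYOFF[(mi, mj)]
        scores[i] += si
        scores[j] += sj

print(scores)  # the MirrorCooperators end up strictly ahead
```

The Nash-seekers never get exploited, but they also never reach the mutual-cooperation payoff; the agents that condition on correlation match them in every mixed pairing and beat them overall.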