I’d go so far as to say that anyone who advocates cooperating in a one-shot prisoners’ dilemma simply doesn’t understand the setting. By definition, defecting gives you a better outcome than cooperating. Anyone who claims otherwise is changing the definition of the prisoners’ dilemma.
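For reference, this is the payoff structure being invoked; the numbers below are the usual illustrative ones (chosen purely for concreteness), with each cell giving (row player, column player) payoffs and T > R > P > S:

\[
\begin{array}{c|cc}
 & \text{C} & \text{D} \\ \hline
\text{C} & (R, R) = (3, 3) & (S, T) = (0, 5) \\
\text{D} & (T, S) = (5, 0) & (P, P) = (1, 1)
\end{array}
\]

Against a cooperator, defecting gets T = 5 instead of R = 3; against a defector, it gets P = 1 instead of S = 0. That strict dominance is what “by definition” refers to.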
I think this is correct. I think the reason to cooperate is not to get the best personal outcome, but because you care about the other person. I think we have evolved to cooperate, or perhaps that should be stated as we have evolved to want to cooperate. We have evolved to value cooperating. Our values come from our genes and our memes, and both are subject to evolution, to natural selection. But we want to cooperate.
So if I am in a prisoner’s dilemma against another human whom I perceive as “one of us,” I will choose cooperation. Essentially, I care about their outcome. But in a one-shot PD, defecting is the “better” strategy. The problem is that with genetic and/or memetic evolution of cooperation, we are not playing in a one-shot PD. We are playing with a set of values that developed over many shots.
Of course we don’t always cooperate. But when we do cooperate in one-shot PDs, it is because, in some sense, there are so darn many one-shot PDs, especially in the universe of hypotheticals, that we effectively know there is no such thing as a one-shot PD. This should not be too hard to accept around here, where people semi-routinely accept simulations of themselves or clones of themselves as somehow just as important as their actual selves. That is, we don’t even accept the “one-shottedness” of ourselves.
I think the reason to cooperate is not to get the best personal outcome, but because you care about the other person.
If you have 100% identical consequentialist values to all other humans, then that means ‘cooperation’ and ‘defection’ are both impossible for humans (because they can’t be put in PDs). Yet it will still be correct to defect (given that your decision and the other player’s decision don’t strongly depend on each other) if you ever run into an agent that doesn’t share all your values. See The True Prisoner’s Dilemma.
This shows that the iterated dilemma and the dilemma-with-common-knowledge-of-rationality allow cooperation (i.e., giving up on your goal to enable someone else to achieve a goal you genuinely don’t want them to achieve), whereas loving compassion and shared values merely change goal-content. To properly visualize the PD, you need an actual value conflict—e.g., imagine you’re playing against a serial killer in a hostage negotiation. ‘Cooperating’ is just an English-language label; the important thing is the game-theoretic structure, which allows that sometimes ‘cooperating’ looks like letting people die in order to appease a killer’s antisocial goals.
To properly visualize the PD, you need an actual value conflict
I think belief conflicts might work, even if the same values are shared. Suppose you and I are at a control panel for three remotely wired bombs in population centers. Both of us want as many people to live as possible. One bomb will go off in ten seconds unless we disarm it, but the others will stay inert unless activated. I believe that pressing the green button causes all bombs to explode, and pressing the red button defuses the time bomb. You believe the same thing, but with the colors reversed. Both of us would rather that no buttons be pressed than both buttons be pressed, but each of us would prefer that just the defuse button be pressed, and that the other person not mistakenly kill all three groups. (Here, attempting to defuse is ‘defecting’ and not attempting to defuse is ‘cooperating’.)
[Edit]: As written, in terms of lives saved, this doesn’t have the property that your payoff from (D, D) beats your payoff from (C, D); if I press my button, you are indifferent between pressing yours and not pressing it. So D only weakly dominates C rather than strictly, but the important part of the structure is preserved, and a minor change would make D strictly dominate C.
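Here is a quick sanity check of that structure, scoring outcomes as population centers saved (out of three) as evaluated under one player’s beliefs; the other player’s situation is the mirror image. The numbers are just my reading of the scenario above, nothing canonical.

```python
# Two-button bomb scenario, scored as population centers saved (0-3), evaluated
# under *my* beliefs: my button defuses the time bomb, your button detonates all
# three bombs.  D = press your own button ("attempt to defuse"), C = don't press.

def centers_saved(my_action, your_action):
    """Centers saved, under my beliefs, given both players' actions."""
    if your_action == "D":      # you press what I believe is the detonate-all button
        return 0
    if my_action == "D":        # I press what I believe is the defuse button
        return 3                # all three centers survive
    return 2                    # the time bomb goes off; the other two stay inert

for your_action in ("C", "D"):
    d = centers_saved("D", your_action)
    c = centers_saved("C", your_action)
    print(f"you play {your_action}: D gives me {d}, C gives me {c}")

# you play C: D gives me 3, C gives me 2   -> D strictly better
# you play D: D gives me 0, C gives me 0   -> indifferent
# So D only weakly dominates C, which is exactly the caveat in the edit above.
```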
I think belief conflicts might work, even if the same values are shared.
You can solve belief conflicts simply by trading in a prediction market with decision-contingent contracts (a “decision market”). Value conflicts are more general than that.
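For what it’s worth, here is a toy sketch of that idea applied to the bomb example, under heavy simplifying assumptions: the “market price” of each decision-contingent contract is stubbed out as the average of the traders’ reported conditional expectations, whereas a real decision market would elicit those numbers through trading. The trader names and forecasts are hypothetical.

```python
# Toy decision-market sketch: one contract per candidate action, paying out the
# number of population centers saved *if* that action is taken; contracts
# conditional on the action not taken are voided (trades refunded).
# Price formation is stubbed as a simple average of reported expectations.

from statistics import mean

# Hypothetical conditional forecasts of centers saved.  "me" is only moderately
# confident that red defuses; "you" is nearly certain that green defuses.
forecasts = {
    "me":  {"press_red": 2.5,  "press_green": 0.75},
    "you": {"press_red": 0.5,  "press_green": 2.75},
}

prices = {
    action: mean(trader[action] for trader in forecasts.values())
    for action in ("press_red", "press_green")
}

chosen = max(prices, key=prices.get)            # act on the higher conditional price
voided = [a for a in prices if a != chosen]     # the other contract is voided

print(prices)           # {'press_red': 1.5, 'press_green': 1.75}
print(chosen, voided)   # press_green ['press_red']
```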
I think this is misusing the word “general.” Value conflicts are narrower than the full class of games that have the PD preference ordering. I do agree that value conflicts are harder to resolve than belief conflicts, but that doesn’t make them more general.
If you have 100% identical consequentialist values to all other humans, then that means ‘cooperation’ and ‘defection’ are both impossible for humans (because they can’t be put in PDs). … To properly visualize the PD, you need an actual value conflict
True, but the flip side of this is that efficiency (in Coasian terms) is precisely defined as pursuing 100% identical consequentialist values, where the shared “values” are determined by a weighted sum of each agent’s utility function (and the weights are typically determined by agent endowments).
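Spelled out, with \(w_i\) standing in for whatever the endowments determine, that notion of efficiency is maximization of a fixed weighted sum of the agents’ utilities:

\[
x^{*} = \arg\max_{x} \sum_{i} w_{i}\, u_{i}(x), \qquad w_{i} \ge 0 .
\]

Under the usual convexity assumptions, every Pareto-efficient outcome maximizes some such weighted sum, which is the sense in which efficiency behaves like a single shared consequentialist objective.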
I think the reason to cooperate is not to get the best personal outcome, but because you care about the other person.
I just want to make it clear that by saying this, you’re changing the setting of the prisoners’ dilemma, so you shouldn’t even call it a prisoners’ dilemma anymore. The prisoners’ dilemma is defined so that you get more utility by defecting; if you say you care about your opponent’s utility enough to cooperate, then you don’t get more utility by defecting, since your opponent’s gain from your cooperation enters your own utility. Therefore, all you’re saying is that you can never be in a true prisoners’ dilemma game; you’re NOT saying that in a true PD it’s correct to cooperate (again, by definition, it isn’t).
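A quick way to make that concrete, using the same illustrative payoffs as above and a hypothetical “caring” weight on the opponent’s material payoff: once the weight is large enough, defection no longer dominates in your actual utilities, so the game you are really playing is not a PD.

```python
# Illustrative PD material payoffs, indexed by (my_action, opponent_action)
# and stored as (my_payoff, opponent_payoff): T=5 > R=3 > P=1 > S=0.
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def my_utility(my_action, opp_action, care=0.0):
    """My material payoff plus `care` times my opponent's material payoff."""
    mine, theirs = PAYOFFS[(my_action, opp_action)]
    return mine + care * theirs

def defection_dominates(care):
    """True iff D gives me strictly more utility than C against both opponent moves."""
    return all(
        my_utility("D", opp, care) > my_utility("C", opp, care)
        for opp in ("C", "D")
    )

print(defection_dominates(care=0.0))  # True:  in material payoffs this is a PD
print(defection_dominates(care=1.0))  # False: with full caring, D no longer dominates
```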
The most likely reason people are evolutionarily predisposed to cooperate in real-life PDs is that almost all real-life PDs are repeated games and not one-shot. Repeated prisoners’ dilemmas are completely different beasts, and it can definitely be correct to cooperate in them.
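To illustrate the contrast (with the same hypothetical payoffs and the simplest possible strategies): against a conditional cooperator like tit-for-tat, “always defect” wins the first round and then gets locked into mutual defection, ending up far behind a cooperator over many rounds.

```python
# Repeated PD with the same illustrative payoffs: T=5 > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def always_cooperate(history):
    return "C"

def play(strategy_a, strategy_b, rounds=100):
    """Each strategy sees its own history as (own move, opponent move) pairs."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(always_defect, tit_for_tat))     # (104, 99):  one exploitation, then mutual defection
print(play(always_cooperate, tit_for_tat))  # (300, 300): sustained mutual cooperation
```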