If “Defect” has the wrong connotations, that seems to me like a reason to pick a different label for the math, rather than switching to different math.
I think that this is often an issue of differing beliefs among the players and different weightings over player payoffs. In What Counts as Defection?, I wrote:
Informal definition. A player defects when they increase their personal payoff at the expense of the group.
I went on to formalize this as
Definition. Player $i$'s action $a \in A_i$ is a defection against strategy profile $s$ and weighting $(\alpha_j)_{j=1,\dots,n}$ if
1. Personal gain: $P_i(a, s_{-i}) > P_i(s_i, s_{-i})$
2. Social loss: $\sum_j \alpha_j P_j(a, s_{-i}) < \sum_j \alpha_j P_j(s_i, s_{-i})$
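For concreteness, here is a minimal sketch of that check in Python. The function name, payoff encoding, and numbers are my own illustrative assumptions, not from the original post:

```python
# Minimal sketch of the defection check (names and payoff encoding are
# illustrative, not from the original post).

def is_defection(i, a, s, payoff, weights):
    """True iff player i's action `a` is a defection against strategy
    profile `s` (a tuple of actions) under weighting `weights` (alpha_j),
    where payoff(j, profile) returns player j's payoff."""
    n = len(s)
    deviated = tuple(a if j == i else s[j] for j in range(n))
    personal_gain = payoff(i, deviated) > payoff(i, s)
    social_loss = (
        sum(weights[j] * payoff(j, deviated) for j in range(n))
        < sum(weights[j] * payoff(j, s) for j in range(n))
    )
    return personal_gain and social_loss

# Classic Prisoner's Dilemma with equal weights: defecting from mutual
# cooperation satisfies both conditions.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
pd_payoff = lambda j, profile: PD[profile][j]
print(is_defection(0, "D", ("C", "C"), pd_payoff, [1, 1]))  # True
```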
Under this model, this implies two potential sources of disagreement about defections:
Disagreement in beliefs. You think everyone agreed to hunt stag; I'm not so sure. I hunt rabbit, you say I defected, and I disagree. Under your beliefs (the strategy profile $s$ you thought we'd agreed to follow, namely everyone hunting stag), it was a defection. In fact, it was worse than a defection, because there wasn't personal gain for me: I just sabotaged the group because I was scared (condition 2 above).
Under my beliefs, it wasn’t a defection—I thought it was quite unlikely that we would all hunt stag, and so I salvaged the situation by hunting rabbit.
Disagreement in weighting $(\alpha_j)$. There might be an implicit social contract: if we both did half the work on a project, it would be a defection for me to take all of the credit. But if there's no implicit agreement and we're "just" playing a constant-sum game, then that's simply me being rational. Tough luck, it's a tough world out there in those normal-form games! (The sketch below works through both kinds of disagreement with concrete numbers.)
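Continuing the is_defection sketch above, here is a toy illustration of both sources of disagreement; the stag-hunt and credit-splitting payoffs are made up for the example:

```python
# Toy stag hunt, payoffs as (player 0, player 1); numbers are illustrative.
STAG_HUNT = {("stag", "stag"): (4, 4), ("stag", "rabbit"): (0, 3),
             ("rabbit", "stag"): (3, 0), ("rabbit", "rabbit"): (3, 3)}
sh_payoff = lambda j, profile: STAG_HUNT[profile][j]

# Disagreement in beliefs: same action ("rabbit"), different believed profiles.
# Under your believed profile (stag, stag), my rabbit play has no personal
# gain, only social loss: sabotage rather than a formal defection.
print(is_defection(0, "rabbit", ("stag", "stag"), sh_payoff, [1, 1]))    # False
# Under my believed profile (I expected you to hunt rabbit), playing rabbit
# gains me payoff and raises the group total, so it isn't a defection either.
print(is_defection(0, "rabbit", ("stag", "rabbit"), sh_payoff, [1, 1]))  # False

# Disagreement in weighting: credit-splitting with made-up numbers.
CREDIT = {("share", "share"): (5, 5), ("grab", "share"): (9, 0)}
credit_payoff = lambda j, profile: CREDIT[profile][j]
# Under the implicit-contract weighting (your payoff counts), grabbing all
# the credit is a defection; give your payoff weight 0 and it no longer is.
print(is_defection(0, "grab", ("share", "share"), credit_payoff, [1, 1]))  # True
print(is_defection(0, "grab", ("share", "share"), credit_payoff, [1, 0]))  # False
```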
Speculation: This explains why it can sometimes feel right to defect in PD. This need not be because our "terminal values" agree with the other player's (e.g. I'd feel bad if Bob went to jail for 10 years); rather, the rightness is likely judged by the part of our brain that helps us "be the reliable kind of person with whom one can cooperate" by making us feel bad about transgressions/defections, even against someone with orthogonal terminal values. If there's no (implicit) contract, then that bad feeling might not pop up.
I think this explains (at least part of) why defection in stag hunt can “feel different” than defection in PD.
I'm mulling this over in the context of "How should the review even work for concepts that have continued to get written-on since 2018?". I notice that the ideal version of the "Schelling Choice is Rabbit" post relies a bit on both "What Counts as Defection?" and "Most prisoner's dilemmas are stag hunts, most stag hunts are battles of the sexes", which both came later.
(I think the general principle of "try during the review to holistically review sequences/followups/concepts" makes sense. But I still feel confused about how to actually operationalize that so that the process is clear and outputs a coherent product.)