What counts as defection?

Thanks to Michael Dennis for proposing the formal definition; to Andrew Critch for pointing me in this direction; to Abram Demski for proposing non-negative weighting; and to Alex Appel, Scott Emmons, Evan Hubinger, philh, Rohin Shah, and Carroll Wainwright for their feedback and ideas.
There’s a good chance I’d like to publish this at some point as part of a larger work. However, I wanted to make the work available now, in case that doesn’t happen soon.
They can’t prove the conspiracy… But they could, if Steve runs his mouth.
The police chief stares at you.
You stare at the table. You’d agreed (sworn!) to stay quiet. You’d even studied game theory together. But, you hadn’t understood what an extra year of jail meant.
The police chief stares at you.
Let Steve be the gullible idealist. You have a family waiting for you.
Sunlight stretches across the valley, dappling the grass and warming your bow. Your hand anxiously runs along the bowstring. A distant figure darts between trees, and your stomach rumbles. The day is near spent.
The stags run strong and free in this land. Carla should meet you there. Shouldn’t she? Who wants to live like a beggar, subsisting on scraps of lean rabbit meat?
In your mind’s eye, you reach the stags, alone. You find one, and your arrow pierces its barrow. The beast shoots away; the rest of the herd follows. You slump against the tree, exhausted, and never open your eyes again.
People talk about ‘defection’ in social dilemma games, from the prisoner’s dilemma to stag hunt to chicken. In the tragedy of the commons, we talk about defection. The concept has become a regular part of LessWrong discourse.

Informal definition. A player defects when they increase their personal payoff at the expense of the group.

This informal definition is no secret, being echoed from the ancient Formal Models of Dilemmas in Social Decision-Making to the recent Classifying games like the Prisoner’s Dilemma:

you can model the “defect” action as “take some value for yourself, but destroy value in the process”.

Given that the prisoner’s dilemma is the bread and butter of game theory and of many parts of economics, evolutionary biology, and psychology, you might think that someone had already formalized this. However, to my knowledge, no one has.
Formalism
Consider a finite n-player normal-form game, with player i having pure action set Ai and payoff function Pi: A1×…×An → R. Each player i chooses a strategy si ∈ Δ(Ai) (a distribution over Ai). Together, the strategies form a strategy profile s := (s1,…,sn). s−i := (s1,…,si−1,si+1,…,sn) is the strategy profile, excluding player i’s strategy. A payoff profile contains the payoffs for all players under a given strategy profile.
A utility weighting (αj)j=1,…,n is a set of n non-negative weights (as in Harsanyi’s utilitarian theorem). You can consider the weights as quantifying each player’s contribution; they might represent a perceived social agreement or be the explicit result of a bargaining process.
When all αj are equal, we’ll call that an equal weighting. However, if there are “utility monsters”, we can downweight them accordingly.
We’re implicitly assuming that payoffs are comparable across players. We want to investigate: given a utility weighting, which actions are defections?
Definition. Player i’s action a ∈ Ai is a defection against strategy profile s and weighting (αj)j=1,…,n if
1. Personal gain: Pi(a, s−i) > Pi(si, s−i)
2. Social loss: ∑j αj Pj(a, s−i) < ∑j αj Pj(si, s−i)
If such an action exists for some player i, strategy profile s, and weighting, then we say that there is an opportunity for defection in the game.
Remark. For an equal weighting, condition (2) is equivalent to demanding that the action not be a Kaldor-Hicks improvement.
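The two conditions can be checked mechanically. Below is a minimal Python sketch (the function names and the nested-list payoff encoding are my own, not from any library): it computes expected payoffs under a mixed strategy profile and tests a pure deviation against conditions (1) and (2).

```python
import itertools

def expected_payoffs(payoffs, profile):
    """Expected payoff of each player.
    payoffs[j][a_1][a_2]... is player j's payoff at the pure action profile;
    profile is a list of mixed strategies (probability vectors)."""
    n = len(payoffs)
    result = [0.0] * n
    for actions in itertools.product(*(range(len(p)) for p in profile)):
        prob = 1.0
        for j, a_j in enumerate(actions):
            prob *= profile[j][a_j]
        for j in range(n):
            val = payoffs[j]
            for a_j in actions:
                val = val[a_j]
            result[j] += prob * val
    return result

def is_defection(payoffs, i, a, profile, alpha):
    """Definition: player i's pure action a is a defection against profile
    and weighting alpha iff (1) personal gain and (2) weighted social loss."""
    one_hot = [0.0] * len(profile[i])
    one_hot[a] = 1.0
    deviated = list(profile)
    deviated[i] = one_hot                 # i plays a with certainty
    base = expected_payoffs(payoffs, profile)
    dev = expected_payoffs(payoffs, deviated)
    gain = dev[i] > base[i]               # condition (1)
    loss = sum(w * x for w, x in zip(alpha, dev)) < \
           sum(w * x for w, x in zip(alpha, base))  # condition (2)
    return gain and loss

# Prisoner's dilemma with (T, R, P, S) = (3, 2, 1, 0); action 0 = cooperate, 1 = defect.
pd = [
    [[2, 0], [3, 1]],  # player 1's payoffs P1(a1, a2)
    [[2, 3], [0, 1]],  # player 2's payoffs P2(a1, a2)
]
cooperate = [1.0, 0.0]
assert is_defection(pd, i=0, a=1, profile=[cooperate, cooperate], alpha=[1, 1])
```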
Our definition seems to make reasonable intuitive sense. In the tragedy of the commons, each player rationally increases their utility, while imposing negative externalities on the other players and decreasing total utility. A spy might leak classified information, benefiting themselves and Russia but defecting against America.
Definition. Cooperation takes place when a strategy profile is maintained despite the opportunity for defection.
Theorem 1. In constant-sum games, there is no opportunity for defection against equal weightings.
Theorem 2. In common-payoff games (where all players share the same payoff function), there is no opportunity for defection.
Edit: In private communication, Joel Leibo points out that these two theorems formalize the intuition behind the proverb “all’s fair in love and war”: you can’t defect in fully competitive or fully cooperative situations.
Proposition 3. There is no opportunity for defection against Nash equilibria.
An action a ∈ Ai is a Pareto improvement over strategy profile s if, for all players j, Pj(a, s−i) ≥ Pj(si, s−i).
Proposition 4. Pareto improvements are never defections.
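As a sanity check on theorem 1, a brute-force search over player 1’s pure deviations in matching pennies (a zero-sum game, chosen here purely as an illustration) finds no opportunity for defection under an equal weighting, since the weighted total is constant:

```python
# Matching pennies: zero-sum, so player 2's payoff table is the negation of player 1's.
P1 = [[1, -1], [-1, 1]]
P2 = [[-1, 1], [1, -1]]

# Under an equal weighting the total payoff is constant (here 0), so condition (2),
# social loss, can never hold: exhaustive search finds no opportunity for defection.
found = False
for s1 in range(2):
    for s2 in range(2):
        for a in range(2):  # player 1 deviates from s1 to a
            gain = P1[a][s2] > P1[s1][s2]
            loss = (P1[a][s2] + P2[a][s2]) < (P1[s1][s2] + P2[s1][s2])
            if gain and loss:
                found = True
assert not found
```

By symmetry, the same search over player 2’s deviations also comes up empty.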
Game Theorems
We can prove that formal defection exists in the trifecta of famous games. Throughout, R is the payoff for mutual cooperation, P the payoff for mutual defection, T the temptation payoff, and S the sucker’s payoff; the Prisoner’s Dilemma inequality is T > R > P > S, the Stag Hunt inequality is R > T ≥ P > S, and the Chicken inequality is T > R ≥ S > P. Feel free to skip the proofs if you aren’t interested.
Theorem 5. In 2×2 symmetric games, if the Prisoner’s Dilemma inequality is satisfied, defection can exist against equal weightings.
Proof. Suppose the Prisoner’s Dilemma inequality holds. Further suppose that R > ½(T+S). Then 2R > T+S. Since T > R but T+S < 2R, both players defect from (C1,C2) with Di.
Suppose instead that R ≤ ½(T+S). Then T+S ≥ 2R > 2P, so T+S > 2P. But P > S, so player 1 defects from (C1,D2) with action D1, and player 2 defects from (D1,C2) with action D2. QED.
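Both cases of the proof can be spot-checked numerically; the payoff values below are illustrative choices of mine satisfying the Prisoner’s Dilemma inequality:

```python
def defects(gain_before, gain_after, total_before, total_after):
    """Conditions (1) and (2) of the definition, under an equal weighting."""
    return gain_after > gain_before and total_after < total_before

# Case 1: R > (T+S)/2, e.g. (T, R, P, S) = (5, 4, 1, 0).
# Defecting from (C1, C2): own payoff R -> T, total 2R -> T+S.
T, R, P, S = 5, 4, 1, 0
assert 2 * R > T + S
assert defects(R, T, 2 * R, T + S)

# Case 2: R <= (T+S)/2, e.g. (T, R, P, S) = (10, 4, 1, 0).
# Player 1 defects from (C1, D2) with D1: own payoff S -> P, total S+T -> 2P.
T, R, P, S = 10, 4, 1, 0
assert 2 * R <= T + S
assert defects(S, P, S + T, 2 * P)
```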
Theorem 6. In 2×2 symmetric games, if the Stag Hunt inequality is satisfied, defection can exist against equal weightings.
Proof. Suppose that the Stag Hunt inequality is satisfied. Let p be the probability that player 1 plays Stag1. We now show that player 2 can always defect against strategy profile (p,Stag2) for some value of p.
For defection’s first condition, we determine when P2(p, Stag2) < P2(p, Hare2):

pR + (1−p)S < pT + (1−p)P
p < (P−S) / [(R−T) + (P−S)].

The denominator is positive (R>T and P>S), as is the numerator, so this bound falls in the open interval (0,1).

For defection’s second condition, we determine when

P1(p, Stag2) + P2(p, Stag2) > P1(p, Hare2) + P2(p, Hare2)
2pR + (1−p)(T+S) > p(S+T) + (1−p)·2P
p > ½[(P−S) + (P−T)] / [(R−T) + (P−S)].

Combining the two conditions, we have

1 > (P−S) / [(R−T) + (P−S)] > p > ½[(P−S) + (P−T)] / [(R−T) + (P−S)].

Since P−T ≤ 0, this holds for some nonempty subinterval of [0,1). QED.
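This interval can be verified numerically; the payoffs below are illustrative choices of mine satisfying the Stag Hunt inequality:

```python
# Illustrative stag hunt payoffs satisfying R > T >= P > S.
R, T, P, S = 3, 2, 1, 0

upper = (P - S) / ((R - T) + (P - S))                    # condition (1): p < upper
lower = 0.5 * ((P - S) + (P - T)) / ((R - T) + (P - S))  # condition (2): p > lower
assert 0 <= lower < upper < 1    # nonempty subinterval of [0, 1)

# Spot-check both conditions at a p strictly inside the interval.
p = (lower + upper) / 2
p2_stag = p * R + (1 - p) * S    # player 2's payoff for Stag2
p2_hare = p * T + (1 - p) * P    # player 2's payoff for deviating to Hare2
total_stag = 2 * p * R + (1 - p) * (T + S)
total_hare = p * (S + T) + (1 - p) * 2 * P
assert p2_hare > p2_stag         # personal gain
assert total_hare < total_stag   # social loss under equal weighting
```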
Theorem 7. In 2×2 symmetric games, if the Chicken inequality is satisfied, defection can exist against equal weightings.
Proof. Assume that the Chicken inequality is satisfied. This proof proceeds similarly to that of theorem 6. Let p be the probability that player 1’s strategy places on Turn1.
For defection’s first condition, we determine when P2(p, Turn2) < P2(p, Ahead2):

pR + (1−p)S < pT + (1−p)P
p > (P−S) / [(R−T) + (P−S)]
1 ≥ p > (S−P) / [(T−R) + (S−P)] > 0.

The inequality flips in the first step because of the division by (R−T)+(P−S), which is negative (T>R and S>P). Since S>P, we have p>0; this reflects the fact that (Ahead1, Turn2) is a Nash equilibrium, against which defection is impossible (proposition 3).

For defection’s second condition, we determine when

P1(p, Turn2) + P2(p, Turn2) > P1(p, Ahead2) + P2(p, Ahead2)
2pR + (1−p)(T+S) > p(S+T) + (1−p)·2P
p < ½[(P−S) + (P−T)] / [(R−T) + (P−S)]
p < ½[(S−P) + (T−P)] / [(T−R) + (S−P)].

The inequality again flips because (R−T)+(P−S) is negative. When R ≤ ½(T+S), we have p < 1, in which case defection does not exist against a pure strategy profile.

Combining the two conditions, we have

½[(S−P) + (T−P)] / [(T−R) + (S−P)] > p > (S−P) / [(T−R) + (S−P)] > 0.

Because T > S,

½[(S−P) + (T−P)] / [(T−R) + (S−P)] > (S−P) / [(T−R) + (S−P)].
QED.
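The chicken interval admits the same kind of numeric spot check; the payoffs below are illustrative choices of mine satisfying the Chicken inequality:

```python
# Illustrative chicken payoffs satisfying T > R >= S > P.
T, R, S, P = 3, 2, 1, 0

lower = (S - P) / ((T - R) + (S - P))                    # condition (1): p > lower
upper = 0.5 * ((S - P) + (T - P)) / ((T - R) + (S - P))  # condition (2): p < upper
assert 0 < lower < upper <= 1

# Spot-check both conditions at a p strictly inside the interval.
p = (lower + upper) / 2
p2_turn = p * R + (1 - p) * S     # player 2's payoff for Turn2
p2_ahead = p * T + (1 - p) * P    # player 2's payoff for deviating to Ahead2
total_turn = 2 * p * R + (1 - p) * (T + S)
total_ahead = p * (S + T) + (1 - p) * 2 * P
assert p2_ahead > p2_turn         # personal gain
assert total_ahead < total_turn   # social loss under equal weighting
```

Here R = ½(T+S) exactly, so the upper bound is 1, matching the proof’s boundary case.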
Discussion
This bit of basic theory will hopefully allow for things like principled classification of policies: “has an agent learned a ‘non-cooperative’ policy in a multi-agent setting?”. For example, the empirical game-theoretic analyses (EGTA) of Leibo et al.’s Multi-agent Reinforcement Learning in Sequential Social Dilemmas say that apple-harvesting agents are defecting when they zap each other with beams. Instead of using a qualitative metric, you could choose a desired non-zapping strategy profile, and then use EGTA to classify formal defections from that. This approach would still have a free parameter, but it seems better.
I had vague pre-theoretic intuitions about ‘defection’, and now I feel more capable of reasoning about what is and isn’t a defection. In particular, I’d been confused by the difference between power-seeking and defection, and now I’m not.