Quick sketch of an idea (written before deeply digesting others’ proposals):
Intuition: Just as player 1 has a best response to a strategy profile s (the unilateral deviation that improves her own utility as much as possible), she also has an altruistic best response (the unilateral deviation that maximally improves the other player’s utility).
Example: stag hunt. If we’re at (rabbit, rabbit), then both players are perfectly aligned. Even if player 1 were infinitely altruistic, she couldn’t unilaterally cause a better outcome for player 2.
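For concreteness, here is a minimal check of that claim in code. The payoff numbers are an illustrative stag-hunt assignment I’m choosing for the sketch, not something given in the text; the point is just that at (rabbit, rabbit) no unilateral move by player 1 gives player 2 anything extra.

```python
# Illustrative stag-hunt payoffs (my choice of numbers): payoffs[(a1, a2)] = (u1, u2).
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "rabbit"): (0, 3),
    ("rabbit", "stag"): (3, 0),
    ("rabbit", "rabbit"): (3, 3),
}

def p2_gains_from_p1_deviations(profile):
    """How player 2's utility changes under each unilateral deviation by player 1."""
    a1, a2 = profile
    base = payoffs[(a1, a2)][1]
    return {alt: payoffs[(alt, a2)][1] - base for alt in ("stag", "rabbit") if alt != a1}

# Player 2's rabbit payoff doesn't depend on player 1's action, so even an infinitely
# altruistic player 1 has nothing to offer her at (rabbit, rabbit):
print(p2_gains_from_p1_deviations(("rabbit", "rabbit")))  # {'stag': 0}
```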
Definition: given a strategy profile s, an a-altruistic better response is any alternative strategy for one player that gives the other player at least a points of extra utility for each point of utility the deviating player sacrifices.
Definition: player 1 is a-aligned with player 2 (at s) if she has no x-altruistic better response for any x > 1/a; equivalently, every remaining way to give player 2 extra utility costs player 1 at least a points of her own utility per point player 2 gains.
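In symbols (my own notation and inequality conventions, not the original’s; I also require that the deviation strictly helps player 2, which I take to be implicit in “better response”):

```latex
% s_1' is an a-altruistic better response for player 1 at s = (s_1, s_2) if it helps
% player 2 at a rate of at least a per point of utility that player 1 gives up:
u_2(s_1', s_2) - u_2(s_1, s_2) \;\ge\; a\,\bigl(u_1(s_1, s_2) - u_1(s_1', s_2)\bigr),
\qquad u_2(s_1', s_2) > u_2(s_1, s_2).
%
% Player 1 is a-aligned with player 2 at s if she has no x-altruistic better response
% for any x > 1/a: every remaining way to give player 2 one more point of utility
% costs player 1 at least a points of her own.
```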
0-aligned: non-spiteful player. They’ll give “free” utility to other players if possible, but they won’t sacrifice any amount of their own utility for the sake of others.
c-aligned for c∈(0,1): slightly altruistic. Your happiness matters a little bit to them, but not as much as their own.
1-aligned: positive-sum maximizer. They’ll give up their own utility as long as the total sum of utility increases.
c-aligned for c∈(1,∞): subservient player. They’ll put a higher priority on your utility than on their own.
∞-aligned: slave. They maximize others’ utility, completely disregarding their own.
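To make these ranges concrete, a tiny worked example with made-up numbers: suppose the only way player 1 can currently help player 2 is a deviation that costs her 2 utils and gives player 2 3 utils.

```latex
% Made-up numbers: the deviation gives player 2 three utils and costs player 1 two.
% A player who weighs player 2's utility at w is willing to switch iff
w \cdot \underbrace{3}_{\text{gain to player 2}} \;\ge\; \underbrace{2}_{\text{cost to player 1}}
\quad\Longleftrightarrow\quad w \;\ge\; \tfrac{2}{3}.
% So a 0-aligned player declines it, anyone at least 2/3-aligned takes it, and a
% 1-aligned (positive-sum) player takes it gladly, since total utility rises by 1.
```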
Obvious extension from players to strategy profiles: How altruistic would a player need to be before they would switch strategies?
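A sketch of that extension, under my own framing and with made-up prisoner’s-dilemma payoffs: grade a profile for player 1 by the smallest altruism weight at which some unilateral deviation that helps player 2 becomes worth taking (infinite if nothing she can do helps player 2).

```python
import math

# Illustrative prisoner's dilemma payoffs (my numbers): PD[(a1, a2)] = (u1, u2).
PD = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
ACTIONS = ("C", "D")

def switch_threshold(payoffs, profile):
    """Smallest altruism weight w such that some unilateral deviation by player 1
    satisfies w * (gain to player 2) >= (utility player 1 gives up)."""
    a1, a2 = profile
    u1, u2 = payoffs[(a1, a2)]
    best = math.inf
    for alt in ACTIONS:
        if alt == a1:
            continue
        v1, v2 = payoffs[(alt, a2)]
        gain, cost = v2 - u2, u1 - v1
        if gain > 0:  # only count deviations that actually help player 2
            best = min(best, max(cost, 0) / gain)
    return best

print(switch_threshold(PD, ("D", "D")))  # 0.25 -> a 1/4-altruistic player already cooperates
print(switch_threshold(PD, ("D", "C")))  # ~0.67 -> needs to be at least 2/3-aligned
print(switch_threshold(PD, ("C", "C")))  # inf -> no unilateral way to help player 2 further
```

Up to boundary cases about strict versus weak inequalities, this threshold coincides with the largest a for which player 1 counts as a-aligned at that profile under the definition above.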