I think this is backward. The game’s payout matrix determines the alignment. Fixed-sum games imply (in the mathematical sense) unaligned players, and common-payoff games ARE the definition of alignment.
When you start looking at meta-games (where resource payoffs differ from utility payoffs, based on agent goals), then “alignment” starts to make sense as a distinct measurement—it’s how much the players’ utility functions transform the payoffs (in the sub-games of a series, and in the overall game) from fixed-sum to common-payoff.
I don’t follow. How can fixed-sum games mathematically imply unaligned players, without a formal metric of alignment between the players?
Also, the payout matrix need not determine the alignment, since each player could have a different policy from strategy profiles to responses, which in principle doesn’t have to select a best response. For example, imagine playing stag hunt with someone who responds ‘hare’ to stag/stag; this isn’t a best response for them, but it minimizes your payoff. However, another partner could respond ‘stag’ to stag/stag, which (I think) makes them “less unaligned with you” than the partner who responds ‘hare’ to stag/stag.
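Concretely (with illustrative stag-hunt numbers of my own; the discussion never fixes a payoff table), the two partners can be written as response functions, and they hand you very different payoffs from the same (stag, stag) profile:

```python
# Illustrative stag-hunt payoffs: (my_move, partner_move) -> (my_payoff, partner_payoff).
# The numbers are assumptions for this sketch, not taken from the discussion.
PAYOFFS = {
    ("stag", "stag"): (3, 3),
    ("stag", "hare"): (0, 2),
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (1, 1),
}

def hare_partner(profile):
    # Responds 'hare' to (stag, stag): not a best response for them
    # (they'd get 3 from 'stag', only 2 from 'hare'), and it minimizes my payoff.
    return "hare"

def stag_partner(profile):
    # Responds 'stag' to (stag, stag): sustains the mutually best outcome.
    return "stag"

profile = ("stag", "stag")
for partner in (hare_partner, stag_partner):
    mine, theirs = PAYOFFS[("stag", partner(profile))]
    print(partner.__name__, mine)  # hare_partner 0, stag_partner 3
```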
Payout correlation IS the metric of alignment. A player who isn’t trying to maximize their (utility) payout is actually not playing the game you’ve defined. You’re simply incorrect (or describing a different payout matrix than you state) that a player doesn’t “have to select a best response”.
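As a sketch of what “payout correlation” could mean operationally (my assumption: Pearson correlation of the two players’ payoffs across the pure outcomes, each weighted equally), a common-payoff game comes out at +1 and a zero-sum game at −1:

```python
import math

# Sketch of "payout correlation" as an alignment metric, under the assumption
# that it means Pearson correlation of the two players' payoffs across the
# four pure outcomes, equally weighted. Numbers are illustrative.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

# Common-payoff game: identical payoffs in every cell -> correlation +1.
common = [(3, 3), (0, 0), (0, 0), (1, 1)]
# Zero-sum game: payoffs in every cell sum to 0 -> correlation -1.
zero_sum = [(3, -3), (0, 0), (1, -1), (2, -2)]

for game in (common, zero_sum):
    a = [p for p, _ in game]
    b = [q for _, q in game]
    print(round(pearson(a, b), 3))  # 1.0, then -1.0
```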
I think “X and Y are playing a game of stag hunt” has multiple meanings.
The meaning generally assumed in game theory when considering just a single game is that the outcomes in the game matrix are utilities. In that case, I completely agree with Dagon: if on some occasion you prefer to pick “hare” even though you know I will pick “stag”, then we are not actually playing the stag hunt game. (Because part of what it means to be playing stag hunt rather than some other game is that we both consider (stag,stag) the best outcome.)
But there are some other situations that might be described by saying that X and Y are playing stag hunt.
Maybe we are playing an iterated stag hunt. Then (by definition) what I care about is still some sort of aggregation of per-round outcomes, and (by definition) each round’s outcome still has (stag,stag) best for me, etc. -- but now I need to strategize over the whole course of the game, and e.g. maybe I think that on a particular occasion choosing “hare” when you chose “stag” will make you understand that you’re being punished for a previous choice of “hare” and make you more likely to choose “stag” in future.
Or maybe we’re playing an iterated iterated stag hunt. Now maybe I choose “hare” when you chose “stag”, knowing that it will make things worse for me over subsequent rounds, but hoping that other people looking at our interactions will learn the rule Don’t Fuck With Gareth and never, ever choose anything other than “stag” when playing with me.
Or maybe we’re playing a game in which the stag hunt matrix describes some sort of payouts that are not exactly utilities. E.g., we’re in a psychology experiment and the experimenter has shown us a 2x2 table telling us how many dollars we will get in various cases—but maybe I’m a billionaire and literally don’t care whether I get $1 or $10 and figure I might as well try to maximize your payout, or maybe you’re a perfect altruist and (in the absence of any knowledge about our financial situations) you just want to maximize the total take, or maybe I’m actually evil and want you to do as badly as possible.
In the iterated cases, it seems to me that the payout matrix still determines alignment given the iteration context—how many games, with what opponents, with what aggregation of per-round utilities to yield overall utility (in prospect or in retrospect; the former may involve temporal discounting too). If I don’t consider a long string of (stag,stag) games optimal then, again, we are not really playing (iterated) stag hunt.
In the payouts-aren’t-really-utilities case, I think it does make sense to ask about the players’ alignment, in terms of how they translate payouts into utilities. But … it feels to me as if this is now basically separate from the actual game itself: the thing we might want to map to a measure of alignedness is something like the function from (both players’ payouts) to (both players’ utilities). The choice of game may then affect how far unaligned players imply unaligned actions, though. (In a game with (cooperate,defect) options where “cooperate” is always much better for the player choosing it than “defect”, the payouts->utilities function would need to be badly anti-aligned, with players actively preferring to harm one another, in order to get uncooperative actions; in a prisoners’ dilemma, it suffices that it not be strongly aligned; each player can slightly prefer the other to do better but still choose defection.)
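The last claim can be checked with a quick sketch (standard PD payouts of my choosing; the weight w is a made-up “slightly aligned” payouts-to-utilities mapping): even when each player puts some positive weight on the other’s payout, defection stays strictly dominant.

```python
# Sketch of the claim that a weakly aligned payouts->utilities mapping still
# yields defection in a prisoners' dilemma. Payout numbers and the weight w
# are my illustrative assumptions.
PAYOUTS = {  # (A_move, B_move) -> (A_payout, B_payout)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def utility_A(a_move, b_move, w=0.2):
    p_a, p_b = PAYOUTS[(a_move, b_move)]
    return p_a + w * p_b  # A slightly prefers B to do better

# "D" is still strictly better for A against either move by B.
for b_move in ("C", "D"):
    print(b_move, utility_A("D", b_move) > utility_A("C", b_move))  # True both times
```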
> In that case, I completely agree with Dagon: if on some occasion you prefer to pick “hare” even though you know I will pick “stag”, then we are not actually playing the stag hunt game. (Because part of what it means to be playing stag hunt rather than some other game is that we both consider (stag,stag) the best outcome.)
It seems to me like you’re assuming that players must respond rationally, or else they’re playing a different game, in some sense. But why? The stag hunt game is defined by a certain set of payoff inequalities holding in the game. Both players can consider (stag,stag) the best outcome, but that doesn’t mean they have to play stag against (stag, stag). That requires further rationality assumptions (which I don’t think are necessary in this case).
If I’m playing against someone who always defects against cooperate/cooperate, versus against someone who always cooperates against cooperate/cooperate, am I “not playing iterated PD” in one of those cases?
I’m not 100% sure I am understanding your terminology. What does it mean to “play stag against (stag,stag)” or to “defect against cooperate/cooperate”?
If your opponent is not in any sense a utility-maximizer then I don’t think it makes sense to talk about your opponent’s utilities, which means that it doesn’t make sense to have a payout matrix denominated in utility, which means that we are not in the situation of my second paragraph above (“The meaning generally assumed in game theory...”).
We might be in the situation of my last-but-two paragraph (“Or maybe we’re playing a game in which...”): the payouts might be something other than utilities. Dollars, perhaps, or just numbers written on a piece of paper. In that case, all the things I said about that situation apply here. In particular, I agree that it’s then reasonable to ask “how aligned is B with A’s interests?”, but I think this question is largely decoupled from the specific game and is more about the mapping from (A’s payout, B’s payout) to (A’s utility, B’s utility).
I guess there are cases where that isn’t enough, where A’s and/or B’s utility is not a function of the payouts alone. Maybe A just likes saying the word “defect”. Maybe B likes to be seen as the sort of person who cooperates. Etc. But at this point it feels to me as if we’ve left behind most of the simplicity and elegance that we might have hoped to bring by adopting the “two-player game in normal form” formalism in the first place, and if you’re prepared to consider scenarios where A just likes choosing the top-left cell in a 2x2 array then you also need to consider ones like the ones I described earlier in this paragraph—where in fact it’s not just the 2x2 payout matrix that matters but potentially any arbitrary details about what words are used when playing the game, or who is watching, or anything else. So if you’re trying to get to the essence of alignment by considering simple 2x2 games, I think it would be best to leave that sort of thing out of it, and in that case my feeling is that your options are (a) to treat the payouts as actual utilities (in which case, once again, I agree with Dagon and think all the alignment information is in the payout matrix), or (b) to treat them as mere utility-function-fodder, but to assume that they’re all the fodder the utility functions get (in which case, as above, I think none of the alignment information is in the payout matrix and it’s all in the payouts-to-utilities mapping), or (c) to consider some sort of iterated-game setup (in which case, I think you need to nail down what sort of iterated-game setup before asking how to get a measure of alignment out of it).
> I’m not 100% sure I am understanding your terminology. What does it mean to “play stag against (stag,stag)” or to “defect against cooperate/cooperate”?
Let π_i(σ) = σ′_i be player i’s response function to strategy profile σ. Given some strategy profile (like stag/stag), player i selects a response. I mean “response” in terms of “best response”—I don’t necessarily mean that there’s an iterated game. This captures all the relevant “outside details” for how decisions are made.
> If your opponent is not in any sense a utility-maximizer then I don’t think it makes sense to talk about your opponent’s utilities, which means that it doesn’t make sense to have a payout matrix denominated in utility
I don’t think I understand where this viewpoint is coming from. I’m not equating payoffs with VNM-utility, and I don’t think game theory usually does either—for example, the maxmin payoff solution concept does not involve VNM-rational expected utility maximization. I just identify payoffs with “how good is this outcome for the player”, without also demanding that π_i always select a best response. Maybe it’s Boltzmann rational, or maybe it just always selects certain actions (regardless of their expected payouts).
> or (b) to treat them as mere utility-function-fodder, but to assume that they’re all the fodder the utility functions get (in which case, as above, I think none of the alignment information is in the payout matrix and it’s all in the payouts-to-utilities mapping)
There exist two payoff functions. I think I want to know how impact-aligned one player is with another: how do the player’s actual actions affect the other player (in terms of their numerical payoff values). I think (c) is closest to what I’m considering, but in terms of response functions—not actual iterated games.
Sorry, I’m guessing this probably still isn’t clear, but this is the reply I have time to type right now and I figured I’d send it rather than nothing.
Sorry, I think I wasn’t clear about what I don’t understand. What is a “strategy profile (like stag/stag)”? So far as I can tell, the usual meaning of “strategy profile” is the same as that of “strategy”, and a strategy in a one-shot game of stag hunt looks like “stag” or “hare”, or maybe “70% stag, 30% hare”; I don’t understand what “stag/stag” means here.
----
It is absolutely standard in game theory to equate payoffs with utilities. That doesn’t mean that you have to do the same, of course, but I’m sure that’s why Dagon said what he did and it’s why when I was enumerating possible interpretations that was the first one I mentioned.
(The next several paragraphs are just giving some evidence for this; I had a look on my shelves and described what I found. Most detail is given for the one book that’s specifically about formalized 2-player game theory.)
“Two-Person Game Theory” by Rapoport, which happens to be the only book dedicated to this topic I have on my shelves, says this at the start of chapter 2 (titled “Utilities”):
> So far nothing has been said about the nature of the payoffs. [...] It is even conceivable that a man playing Checkers with a child would rather lose than win. In that case a larger payoff must be assigned to his loss than to his win. [...] the game remains undefined if we do not know what payoff magnitudes are assigned by the players to the outcomes, even if the latter are specified in terms of monetary payoffs. However, this problem is bypassed by the game theoretician, who assumes that the payoffs are given.
Unfortunately, Rapoport is using the word “payoffs” to mean two different things here. I think it’s entirely clear from context, though, that his actual meaning is: you may begin by specifying monetary payoffs, but what we care about for game theory is payoffs as utilities. Here’s more from a little later in the chapter:
> The given payoffs are assumed to reflect the psychological worth of the associated outcomes to the player in question.
A bit later:
> When payoffs are specified on an interval scale [as opposed to an “ordinal scale” where you just say which ones are better than which other ones—gjm], they are called utilities.
and:
> As has already been pointed out, these matters are not of concern to the game theoretician. His position is that if utility scales can be determined, then a theory of games can be built on a reliable foundation. If no such utility scale can be established with references to any real subjects, then game theory will not be relevant to the behaviour of people in either a normative or descriptive sense.
As I say, that’s the only book of formal game theory on my shelves. Schelling’s Strategy of Conflict has a little to say about such games, but not much and not in much detail, but it looks to me as if he assumes payoffs are utilities. The following sentence is informative, though it presupposes rather than stating: “But what configuration of value systems for the two participants—of the “payoffs”, in the language of game theory—makes a deterrent threat credible?” (This is from the chapter entitled “International Strategy”; in my copy it’s on page 13.)
Rapoport’s “Strategy and Conscience” isn’t a book of formal game theory, but it does discuss the topic, and it explicitly says: payoffs are utilities.
One chapter in Schelling’s “Choice and Consequence” is concerned with this sort of game theory; he says that the numbers you put in the matrix are either arbitrary things whose relative ordering is the only thing that matters, or numbers that behave like utilities in the sense that the players are trying to maximize their expectations.
The Wikipedia article on game theory says: “The payoffs of the game are generally taken to represent the utility of individual players.” (This is in the section about the use of game theory in economics and business. It does also mention applications in evolutionary biology, where the payoffs are fitnesses—which seem to me very closely analogous to utilities, in that what the evolutionary process stochastically maximizes is something like expected fitness.)
Again, I don’t claim that you have to equate payoffs with utilities; you can apply the formalism of game theory in any way you please! But I don’t think there’s any question that this is the usual way in which payoffs in a game matrix are understood.
----
It feels odd to me to focus on response functions, since as a matter of fact you never actually know the other player’s strategy. (Aside from special cases where your opponent is sufficiently deterministic and sufficiently simple that you can “read their source code” and make reliable predictions from it. There’s a bit of an LW tradition of thinking in those terms, but I think that with the possible exception of reasoning along the lines of “X is an exact copy of me and will therefore make the same decisions as I do” it’s basically never going to be relevant to real decision-making agents because the usual case is that the other player is about as complicated as you are, and you don’t have enough brainpower to understand your own brain completely.)
If you are not considering payouts to be utilities, then you need to note that knowing the other player’s payouts—which is a crucial part of playing this sort of game—doesn’t tell you anything until you also know how those payouts correspond to utilities, or to whatever else the other player might use to guide their decision-making.
(If you aren’t considering that they’re utilities but are assuming that higher is better, then for many purposes that’s enough. But, again, only if you suppose that the other player does actually act as someone would act who prefers higher payouts to lower ones.)
My feeling is that you will get most insight by adopting (what I claim to be) the standard perspective where payoffs are utilities; then, if you want to try to measure alignment, the payoff matrix is the input for your calculation. Obviously this won’t work if one or both players behave in a way not describable by any utility function, but my suspicion is that in such cases you shouldn’t necessarily expect there to be any sort of meaningful measure of how aligned the players are.
> Or maybe we’re playing a game in which the stag hunt matrix describes some sort of payouts that are not exactly utilities. E.g., we’re in a psychology experiment and the experimenter has shown us a 2x2 table telling us how many dollars we will get in various cases—but maybe I’m a billionaire and literally don’t care whether I get $1 or $10 and figure I might as well try to maximize your payout, or maybe you’re a perfect altruist and (in the absence of any knowledge about our financial situations) you just want to maximize the total take, or maybe I’m actually evil and want you to do as badly as possible.
So, if the other player is “always cooperate” or “always defect” or any other method of determining results that doesn’t correspond to the payouts in the matrix shown to you, then you aren’t playing “prisoner’s dilemma” because the utilities to player B are not dependent on what you do. In all these games, you should pick your strategy based on how you expect your counterparty to act, which might or might not include the “in game” incentives as influencers of their behavior.
In static games of complete, perfect information, a normal-form representation of a game is a specification of players’ strategy spaces and payoff functions.
You are playing prisoner’s dilemma when certain payoff inequalities are satisfied in the normal-form representation. That’s it. There is no canonical assumption that players are expected utility maximizers, or expected payoff maximizers.
> because the utilities to player B are not dependent on what you do.
Noting that I don’t follow what you mean by this: do you mean to say that player B’s response cannot be a constant function of strategy profiles (i.e. the response function cannot be constant everywhere)?
Um… the definition of the normal form game you cited explicitly says that the payoffs are in the form of cardinal or ordinal utilities. Which is distinct from in-game payouts.
Also, too, it sounds like you agree that the strategy your counterparty uses can make a normal form game not count as a “stag hunt” or “prisoner’s dilemma” or “dating game”.
> the definition of the normal form game you cited explicitly says that the payoffs are in the form of cardinal or ordinal utilities. Which is distinct from in-game payouts.
No. In that article, the only spot where ‘utility’ appears is identifying utility with the player’s payoffs/payouts. (EDIT: but perhaps I don’t get what you mean by ‘in-game payouts’?)
> that player’s set of payoffs (normally the set of real numbers, where the number represents a cardinal or ordinal utility—often cardinal in the normal-form representation)
To reiterate: I’m not talking about VNM-utility, derived by taking a preference ordering over lotteries and backing out a coherent utility function. I’m talking about the players having payoff functions which cardinally represent the value of different outcomes. We can call the value-units “squiggles”, or “utilons”, or “payouts”; the OP’s question remains.
> Also, too, it sounds like you agree that the strategy your counterparty uses can make a normal form game not count as a “stag hunt” or “prisoner’s dilemma” or “dating game”
Do you have a citation? You seem to believe that this is common knowledge among game theorists, but I don’t think I’ve ever encountered that.
Jacob and I have already considered payout correlation, and I agree that it has some desirable properties. However,
- it’s symmetric across players,
- it’s invariant to player rationality
  - which matters, since alignment seems to not just be a function of incentives, but of what-actually-happens and how that affects different players
- it equally weights each outcome in the normal-form game, ignoring relevant local dynamics. For example, what if part of the game table is zero-sum, and part is common-payoff? Correlation then can be controlled by zero-sum outcomes which are strictly dominated for all players. For example:
1 / 1 || 2 / 2
-.5 / .5 || 1 / 1
and so I don’t think it’s a slam-dunk solution. At the very least, it would require significant support.
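Running the numbers on the game above under that equal-weighting assumption (a quick sketch of mine, not a computation from the thread): the correlation comes out below 1, dragged down entirely by the strictly dominated zero-sum cell, even though dominance-solvable play lands both players on the common-payoff (2, 2) outcome.

```python
import math

# Pearson correlation for the 2x2 game quoted above, with all four outcomes
# weighted equally (the assumption the critique targets).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

# Row 1: (1, 1)    (2, 2)
# Row 2: (-.5, .5) (1, 1)
outcomes = [(1, 1), (2, 2), (-0.5, 0.5), (1, 1)]
a = [p for p, _ in outcomes]
b = [q for _, q in outcomes]
print(round(pearson(a, b), 3))  # 0.932: below 1 purely because of the dominated zero-sum cell
```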
> You’re simply incorrect (or describing a different payout matrix than you state) that a player doesn’t “have to select a best response”.
Why? I suppose it’s common to assume (a kind of local) rationality for each player, but I’m not interested in assuming that here. It may be easier to analyze the best-response case as a first start, though.
It’s a definitional thing. The definition of utility is “the thing people maximize.” If you set up your 2x2 game to have utilities in the payout matrix, then by definition both actors will attempt to pick the box with the biggest number. If you set up your 2x2 game with direct payouts from the game that don’t include psychic (eg “I just like picking the first option given”) or reputational effects, then any concept of alignment is one of:
- assume the players are trying for the biggest number; how much will they be attempting to land on the same box?
- alignment is completely outside of the game, and is one of the features of the function that converts game payouts to global utility
You seem to be muddling those two, and wondering “how much will people attempt to land on the same box, taking into account all factors, but only defining the boxes in terms of game payouts.” The answer there is “you can’t.” Because people (and computer programs) have wonky, screwed-up utility functions (e.g. (spoiler alert) https://en.wikipedia.org/wiki/Man_of_the_Year_(2006_film))
> The definition of utility is “the thing people maximize.”
Only applicable if you’re assuming the players are VNM-rational over outcome lotteries, which I’m not. Forget expected utility maximization.
It seems to me that people are making the question more complicated than it has to be, by projecting their assumptions about what a “game” is. We have payoff numbers describing how “good” each outcome is to each player. We have the strategy spaces, and the possible outcomes of the game. And here’s one approach: fix two response functions in this game, which are functions from strategy profiles to the player’s response strategy. With respect to the payoffs, how “aligned” are these response functions with each other?
This doesn’t make restrictive rationality assumptions. It doesn’t require getting into strange utility assumptions. Most importantly, it’s a clearly-defined question whose answer is both important and not conceptually obvious to me.
(And now that I think of it, I suppose that depending on your response functions, even in zero-sum games, you could have “A aligned with B”, or “B aligned with A”, but not both.)
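One hypothetical way to make that question concrete (my illustration; the thread doesn’t define a measure): at a given profile, score B’s response function by where A’s resulting payoff falls between the worst and best payoffs B could have handed A. The measure is asymmetric by construction, matching the parenthetical above: it scores B’s impact on A only.

```python
# A sketch of one possible "impact alignment" score for a response function,
# not a definition from the discussion: normalize A's payoff under B's actual
# response against the best and worst payoffs B could have given A.
def impact_alignment(payoff_A, profile, respond_B, B_moves):
    a_move, _ = profile
    actual = payoff_A[(a_move, respond_B(profile))]
    options = [payoff_A[(a_move, m)] for m in B_moves]
    lo, hi = min(options), max(options)
    return 1.0 if hi == lo else (actual - lo) / (hi - lo)

# Player A's stag-hunt payoffs (illustrative numbers).
PAYOFF_A = {("stag", "stag"): 3, ("stag", "hare"): 0,
            ("hare", "stag"): 2, ("hare", "hare"): 1}

def hare_responder(profile):
    return "hare"  # responds 'hare' to (stag, stag)

def stag_responder(profile):
    return "stag"  # responds 'stag' to (stag, stag)

print(impact_alignment(PAYOFF_A, ("stag", "stag"), hare_responder, ["stag", "hare"]))  # 0.0
print(impact_alignment(PAYOFF_A, ("stag", "stag"), stag_responder, ["stag", "hare"]))  # 1.0
```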
> The definition of utility is “the thing people maximize.”
> Only applicable if you’re assuming the players are VNM-rational over outcome lotteries, which I’m not. Forget expected utility maximization.
Then what’s the definition / interpretation of “payoff”, i.e. the numbers you put in the matrix? If they’re not utilities, are they preferences? How can they be preferences if agents can “choose” not to follow them? Where do the numbers come from?
Note that Vanessa’s answer doesn’t need to depend on u_B, which I think is its main strength and the reason it makes intuitive sense. (And I like the answer much less when u_B is used to impose constraints.)
I think I’ve been unclear in my own terminology, in part because I’m uncertain about what other people have meant by ‘utility’ (what you’d recover from perfect IRL / Savage’s theorem, or cardinal representation of preferences over outcomes?) My stance is that they’re utilities but that I’m not assuming the players are playing best responses in order to maximize expected utility.
> How can they be preferences if agents can “choose” not to follow them?
Am I allowed to have preferences without knowing how to maximize those preferences, or while being irrational at times? Boltzmann-rational agents have preferences, don’t they? These debates have surprised me; I didn’t think that others tied together “has preferences” and “acts rationally with respect to those preferences.”
There’s a difference between “the agent sometimes makes mistakes in getting what it wants” and “the agent does the literal opposite of what it wants”; in the latter case you have to wonder what the word “wants” even means any more.
My understanding is that you want to include cases like “it’s a fixed-sum game, but agent B decides to be maximally aligned / cooperative and do whatever maximizes A’s utility”, and in that case I start to question what exactly B’s utility function meant in the first place.
I’m told that Minimal Rationality addresses this sort of position, where you allow the agent to make mistakes, but don’t allow it to be e.g. literally pessimal since at that point you have lost the meaning of the word “preference”.
(I kind of also want to take the more radical position where when talking about abstract agents the only meaning of preferences is “revealed preferences”, and then in the special case of humans we also see this totally different thing of “stated preferences” that operates at some totally different layer of abstraction and where talking about “making mistakes in achieving your preferences” makes sense in a way that it does not for revealed preferences. But I don’t think you need to take this position to object to the way it sounds like you’re using the term here.)
Hm. At first glance this feels like a “1” game to me, if they both use the “take the strictly dominant action” solution concept. The alignment changes if they make decisions differently, but under the standard rationality assumptions, it feels like a perfectly aligned game.
Correlation between outcomes, not within them.
If both players prefer to be in the same box, they are aligned. As we add indifference and opposing choices, they become unaligned.
In your example, both people have the exact same ordering of outcomes. In a classic PD, there is some mix.
Totally unaligned (constant value) example:
0 / 2 || 2 / 0
2 / 0 || 0 / 2
The usual Pearson correlation in particular is also insensitive to positive affine transformations of either player’s utility, so seems to be about the right thing, doesn’t just try to check if the incomparable utility values are equal.
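A quick check of that invariance claim (with made-up payoff vectors of my own): rescaling one player’s payoffs by a positive affine transformation leaves the Pearson correlation unchanged.

```python
import math

# Check that Pearson correlation is unchanged by a positive affine rescaling
# of one player's payoffs (u -> c*u + d with c > 0). Vectors are made-up
# examples, not from the thread.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

a = [3, 0, 1, 2]
b = [2, 1, 0, 3]
r1 = pearson(a, b)
r2 = pearson([10 * x + 7 for x in a], b)  # positive affine transform of a
print(round(r1, 6), round(r2, 6))  # 0.6 0.6
```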
I think this is backward. The game’s payout matrix determines the alignment. Fixed-sum games imply (in the mathematical sense) unaligned players, and common-payoff games ARE the definition of alignment.
When you start looking at meta-games (where resource payoffs differ from utility payoffs, based on agent goals), then “alignment” starts to make sense as a distinct measurement—it’s how much the players’ utility functions transform the payoffs (in the sub-games of a series, and in the overall game) from fixed-sum to common-payoff.
I don’t follow. How can fixed-sum games mathematically imply unaligned players, without a formal metric of alignment between the players?
Also, the payout matrix need not determine the alignment, since each player could have a different policy from strategy profiles to responses, which in principle doesn’t have to select a best response. For example, imagine playing stag hunt with someone who responds ‘hare’ to stag/stag; this isn’t a best response for them, but it minimizes your payoff. However, another partner could respond ‘stag’ to stag/stag, which (I think) makes them “less unaligned with you” with you than the partner who responds ‘hare’ to stag/stag.
Payout correlation IS the metric of alignment. A player who isn’t trying to maximize their (utility) payout is actually not playing the game you’ve defined. You’re simply incorrect (or describing a different payout matrix than you state) that a player doesn’t “have to select a best response”.
I think “X and Y are playing a game of stag hunt” has multiple meanings.
The meaning generally assumed in game theory when considering just a single game is that the outcomes in the game matrix are utilities. In that case, I completely agree with Dagon: if on some occasion you prefer to pick “hare” even though you know I will pick “stag”, then we are not actually playing the stag hunt game. (Because part of what it means to be playing stag hunt rather than some other game is that we both consider (stag,stag) the best outcome.)
But there are some other situations that might be described by saying that X and Y are playing stag hunt.
Maybe we are playing an iterated stag hunt. Then (by definition) what I care about is still some sort of aggregation of per-round outcomes, and (by definition) each round’s outcome still has (stag,stag) best for me, etc. -- but now I need to strategize over the whole course of the game, and e.g. maybe I think that on a particular occasion choosing “hare” when you chose “stag” will make you understand that you’re being punished for a previous choice of “hare” and make you more likely to choose “stag” in future.
Or maybe we’re playing an iterated iterated stag hunt. Now maybe I choose “hare” when you chose “stag”, knowing that it will make things worse for me over subsequent rounds, but hoping that other people looking at our interactions will learn the rule Don’t Fuck With Gareth and never, ever choose anything other than “stag” when playing with me.
Or maybe we’re playing a game in which the stag hunt matrix describes some sort of payouts that are not exactly utilities. E.g., we’re in a psychology experiment and the experimenter has shown us a 2x2 table telling us how many dollars we will get in various cases—but maybe I’m a billionaire and literally don’t care whether I get $1 or $10 and figure I might as well try to maximize your payout, or maybe you’re a perfect altruist and (in the absence of any knowledge about our financial situations) you just want to maximize the total take, or maybe I’m actually evil and want you to do as badly as possible.
In the iterated cases, it seems to me that the payout matrix still determines alignment given the iteration context—how many games, with what opponents, with what aggregation of per-round utilities to yield overall utility (in prospect or in retrospect; the former may involve temporal discounting too). If I don’t consider a long string of (stag,stag) games optimal then, again, we are not really playing (iterated) stag hunt.
In the payouts-aren’t-really-utilities case, I think it does make sense to ask about the players’ alignment, in terms of how they translate payouts into utilities. But … it feels to me as if this is now basically separate from the actual game itself: the thing we might want to map to a measure of alignedness is something like the function from (both players’ payouts) to (both players’ utilities). The choice of game may then affect how far unaligned players imply unaligned actions, though. (In a game with (cooperate,defect) options where “cooperate” is always much better for the player choosing it than “defect”, the payouts->utilities function would need to be badly anti-aligned, with players actively preferring to harm one another, in order to get uncooperative actions; in a prisoners’ dilemma, it suffices that it not be strongly aligned; each player can slightly prefer the other to do better but still choose defection.)
Thanks for the thoughtful response.
It seems to me like you’re assuming that players must respond rationally, or else they’re playing a different game, in some sense. But why? The stag hunt game is defined by a certain set of payoff inequalities holding in the game. Both players can consider (stag,stag) the best outcome, but that doesn’t mean they have to play stag against (stag, stag). That requires further rationality assumptions (which I don’t think are necessary in this case).
If I’m playing against someone who always defects against cooperate/cooperate, versus against someone who always cooperates against cooperate/cooperate, am I “not playing iterated PD” in one of those cases?
I’m not 100% sure I am understanding your terminology. What does it mean to “play stag against (stag,stag)” or to “defect against cooperate/cooperate”?
If your opponent is not in any sense a utility-maximizer then I don’t think it makes sense to talk about your opponent’s utilities, which means that it doesn’t make sense to have a payout matrix denominated in utility, which means that we are not in the situation of my second paragraph above (“The meaning generally assumed in game theory...”).
We might be in the situation of my last-but-two paragraph (“Or maybe we’re playing a game in which...”): the payouts might be something other than utilities. Dollars, perhaps, or just numbers written on a piece of paper. In that case, all the things I said about that situation apply here. In particular, I agree that it’s then reasonable to ask “how aligned is B with A’s interests?”, but I think this question is largely decoupled from the specific game and is more about the mapping from (A’s payout, B’s payout) to (A’s utility, B’s utility).
I guess there are cases where that isn’t enough, where A’s and/or B’s utility is not a function of the payouts alone. Maybe A just likes saying the word “defect”. Maybe B likes to be seen as the sort of person who cooperates. Etc. But at this point it feels to me as if we’ve left behind most of the simplicity and elegance that we might have hoped to bring by adopting the “two-player game in normal form” formalism in the first place, and if you’re prepared to consider scenarios where A just likes choosing the top-left cell in a 2x2 array then you also need to consider ones like the ones I described earlier in this paragraph—where in fact it’s not just the 2x2 payout matrix that matters but potentially any arbitrary details about what words are used when playing the game, or who is watching, or anything else.
So if you’re trying to get to the essence of alignment by considering simple 2x2 games, I think it would be best to leave that sort of thing out of it, and in that case my feeling is that your options are:
(a) to treat the payouts as actual utilities (in which case, once again, I agree with Dagon and think all the alignment information is in the payout matrix), or
(b) to treat them as mere utility-function-fodder, but to assume that they’re all the fodder the utility functions get (in which case, as above, I think none of the alignment information is in the payout matrix and it’s all in the payouts-to-utilities mapping), or
(c) to consider some sort of iterated-game setup (in which case, I think you need to nail down what sort of iterated-game setup before asking how to get a measure of alignment out of it).
Let π_i(σ) = σ′_i be player i’s response function to strategy profile σ. Given some strategy profile (like stag/stag), player i selects a response. I mean “response” in terms of “best response”—I don’t necessarily mean that there’s an iterated game. This captures all the relevant “outside details” for how decisions are made.
I don’t think I understand where this viewpoint is coming from. I’m not equating payoffs with VNM-utility, and I don’t think game theory usually does either—for example, the maxmin payoff solution concept does not involve VNM-rational expected utility maximization. I just identify payoffs with “how good is this outcome for the player”, without also demanding that πi always select a best response. Maybe it’s Boltzmann rational, or maybe it just always selects certain actions (regardless of their expected payouts).
There exist two payoff functions. I think I want to know how impact-aligned one player is with another: how do the player’s actual actions affect the other player (in terms of their numerical payoff values). I think (c) is closest to what I’m considering, but in terms of response functions—not actual iterated games.
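A minimal sketch of this response-function framing (the payoff numbers and function names are hypothetical, chosen to mirror the stag hunt example earlier in the thread):

```python
# Sketch: response functions map strategy profiles to replies.
# Neither partner below is required to best-respond.

STAG_HUNT = {  # (row action, col action) -> (row payoff, col payoff)
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def friendly(profile):
    """Always hunts stag, whatever the current profile."""
    return "stag"

def spiteful(profile):
    """Responds 'hare' to (stag, stag) -- not a best response for them
    (they'd get 3 instead of 4), but it minimizes your payoff."""
    return "hare" if profile == ("stag", "stag") else "stag"

def my_payoff_if_i_play_stag(partner):
    """Row player's payoff when playing stag and the partner responds
    to the current (stag, stag) profile."""
    return STAG_HUNT[("stag", partner(("stag", "stag")))][0]

assert my_payoff_if_i_play_stag(friendly) == 4  # partner stays with stag
assert my_payoff_if_i_play_stag(spiteful) == 0  # partner deviates, hurting you
```

With respect to the same payoff matrix, the two partners have very different impact on the row player, which is the sense in which one seems "less unaligned" than the other.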
Sorry, I’m guessing this probably still isn’t clear, but this is the reply I have time to type right now and I figured I’d send it rather than nothing.
Sorry, I think I wasn’t clear about what I don’t understand. What is a “strategy profile (like stag/stag)”? So far as I can tell, the usual meaning of “strategy profile” is the same as that of “strategy”, and a strategy in a one-shot game of stag hunt looks like “stag” or “hare”, or maybe “70% stag, 30% hare”; I don’t understand what “stag/stag” means here.
----
It is absolutely standard in game theory to equate payoffs with utilities. That doesn’t mean that you have to do the same, of course, but I’m sure that’s why Dagon said what he did and it’s why when I was enumerating possible interpretations that was the first one I mentioned.
(The next several paragraphs are just giving some evidence for this; I had a look on my shelves and described what I found. Most detail is given for the one book that’s specifically about formalized 2-player game theory.)
“Two-Person Game Theory” by Rapoport, which happens to be the only book dedicated to this topic I have on my shelves, says this at the start of chapter 2 (titled “Utilities”):
Unfortunately, Rapoport is using the word “payoffs” to mean two different things here. I think it’s entirely clear from context, though, that his actual meaning is: you may begin by specifying monetary payoffs, but what we care about for game theory is payoffs as utilities. Here’s more from a little later in the chapter:
A bit later:
and:
As I say, that’s the only book of formal game theory on my shelves. Schelling’s Strategy of Conflict has a little to say about such games, but not much and not in much detail, but it looks to me as if he assumes payoffs are utilities. The following sentence is informative, though it presupposes rather than stating: “But what configuration of value systems for the two participants—of the “payoffs”, in the language of game theory—makes a deterrent threat credible?” (This is from the chapter entitled “International Strategy”; in my copy it’s on page 13.)
Rapoport’s “Strategy and Conscience” isn’t a book of formal game theory, but it does discuss the topic, and it explicitly says: payoffs are utilities.
One chapter in Schelling’s “Choice and Consequence” is concerned with this sort of game theory; he says that the numbers you put in the matrix are either arbitrary things whose relative ordering is the only thing that matters, or numbers that behave like utilities in the sense that the players are trying to maximize their expectations.
The Wikipedia article on game theory says: “The payoffs of the game are generally taken to represent the utility of individual players.” (This is in the section about the use of game theory in economics and business. It does also mention applications in evolutionary biology, where the payoffs are fitnesses—which seem to me very closely analogous to utilities, in that what the evolutionary process stochastically maximizes is something like expected fitness.)
Again, I don’t claim that you have to equate payoffs with utilities; you can apply the formalism of game theory in any way you please! But I don’t think there’s any question that this is the usual way in which payoffs in a game matrix are understood.
----
It feels odd to me to focus on response functions, since as a matter of fact you never actually know the other player’s strategy. (Aside from special cases where your opponent is sufficiently deterministic and sufficiently simple that you can “read their source code” and make reliable predictions from it. There’s a bit of an LW tradition of thinking in those terms, but I think that with the possible exception of reasoning along the lines of “X is an exact copy of me and will therefore make the same decisions as I do” it’s basically never going to be relevant to real decision-making agents because the usual case is that the other player is about as complicated as you are, and you don’t have enough brainpower to understand your own brain completely.)
If you are not considering payouts to be utilities, then you need to note that knowing the other player’s payouts—which is a crucial part of playing this sort of game—doesn’t tell you anything until you also know how those payouts correspond to utilities, or to whatever else the other player might use to guide their decision-making.
(If you aren’t considering that they’re utilities but are assuming that higher is better, then for many purposes that’s enough. But, again, only if you suppose that the other player does actually act as someone would act who prefers higher payouts to lower ones.)
My feeling is that you will get most insight by adopting (what I claim to be) the standard perspective where payoffs are utilities; then, if you want to try to measure alignment, the payoff matrix is the input for your calculation. Obviously this won’t work if one or both players behave in a way not describable by any utility function, but my suspicion is that in such cases you shouldn’t necessarily expect there to be any sort of meaningful measure of how aligned the players are.
Quote: Or maybe we’re playing a game in which the stag hunt matrix describes some sort of payouts that are not exactly utilities. E.g., we’re in a psychology experiment and the experimenter has shown us a 2x2 table telling us how many dollars we will get in various cases—but maybe I’m a billionaire and literally don’t care whether I get $1 or $10 and figure I might as well try to maximize your payout, or maybe you’re a perfect altruist and (in the absence of any knowledge about our financial situations) you just want to maximize the total take, or maybe I’m actually evil and want you to do as badly as possible.
So, if the other player is “always cooperate” or “always defect” or uses any other method of determining results that doesn’t correspond to the payouts in the matrix shown to you, then you aren’t playing “prisoner’s dilemma”, because player B’s utilities are not dependent on what you do. In all these games, you should pick your strategy based on how you expect your counterparty to act, which might or might not include the “in-game” incentives as influencers of their behavior.
Here is the definition of a normal-form game:
You are playing prisoner’s dilemma when certain payoff inequalities are satisfied in the normal-form representation. That’s it. There is no canonical assumption that players are expected utility maximizers, or expected payoff maximizers.
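For concreteness, here is one common formalization of those inequalities, sketched in Python (the symmetric T/R/P/S parameterization is an assumption on my part, not quoted from the definition above):

```python
# Sketch: a symmetric 2x2 game is commonly called a prisoners' dilemma
# when T > R > P > S (temptation > reward > punishment > sucker),
# often with 2R > T + S so mutual cooperation beats alternating exploitation.
# No assumption about how players choose appears anywhere in this check.

def is_prisoners_dilemma(T, R, P, S):
    return T > R > P > S and 2 * R > T + S

assert is_prisoners_dilemma(T=5, R=3, P=1, S=0)      # the textbook values
assert not is_prisoners_dilemma(T=3, R=4, P=1, S=0)  # cooperation dominates: not a PD
```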
Noting that I don’t follow what you mean by this: do you mean to say that player B’s response cannot be a constant function of strategy profiles (ie the response function cannot be constant everywhere)?
Um… the definition of the normal form game you cited explicitly says that the payoffs are in the form of cardinal or ordinal utilities. Which is distinct from in-game payouts.
Also, it sounds like you agree that the strategy your counterparty uses can make a normal-form game not count as a “stag hunt” or “prisoner’s dilemma” or “dating game”.
No. In that article, the only spot where ‘utility’ appears is identifying utility with the player’s payoffs/payouts. (EDIT: but perhaps I don’t get what you mean by ‘in-game payouts’?)
To reiterate: I’m not talking about VNM-utility, derived by taking a preference ordering over lotteries and backing out a coherent utility function. I’m talking about the players having payoff functions which cardinally represent the value of different outcomes. We can call the value-units “squiggles”, or “utilons”, or “payouts”; the OP’s question remains.
No, I don’t agree with that.
Do you have a citation? You seem to believe that this is common knowledge among game theorists, but I don’t think I’ve ever encountered that.
Jacob and I have already considered payout correlation, and I agree that it has some desirable properties. However,
it’s symmetric across players,
it’s invariant to player rationality
which matters, since alignment seems to not just be a function of incentives, but of what-actually-happens and how that affects different players
it equally weights each outcome in the normal-form game, ignoring relevant local dynamics. For example, what if part of the game table is zero-sum, and part is common-payoff? Correlation then can be controlled by zero-sum outcomes which are strictly dominated for all players. For example:
1 / 1 || 2 / 2
-.5 / .5 || 1 / 1
and so I don’t think it’s a slam-dunk solution. At the very least, it would require significant support.
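To illustrate that last point, a quick sketch of my own (reading the example matrix above cell by cell, left to right, top to bottom): the dominated zero-sum cell pulls the Pearson correlation below 1 even though best play always lands on the common-payoff cell.

```python
# Sketch: Pearson correlation of the payoff vectors is dragged below 1
# by a zero-sum cell that rational play never reaches, since row 1 and
# column 2 strictly dominate and best play lands on the (2, 2) cell.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The example game above, flattened cell by cell:
#    1 / 1  ||  2 / 2
#  -.5 / .5 ||  1 / 1
p1 = [1, 2, -0.5, 1]
p2 = [1, 2, 0.5, 1]

r = pearson(p1, p2)
assert abs(r - 0.9316) < 1e-3  # below 1, purely because of the dominated cell
```

Dropping the dominated outcome leaves payoff vectors that agree exactly, so the correlation depends on outcomes no sensible players would ever produce.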
Why? I suppose it’s common to assume (a kind of local) rationality for each player, but I’m not interested in assuming that here. It may be easier to analyze the best-response case as a first start, though.
It’s a definitional thing. The definition of utility is “the thing people maximize.” If you set up your 2x2 game to have utilities in the payout matrix, then by definition both actors will attempt to pick the box with the biggest number. If you set up your 2x2 game with direct payouts from the game that don’t include psychic (e.g. “I just like picking the first option given”) or reputational effects, then any concept of alignment is one of:
assume the players are trying for the biggest number, how much will they be attempting to land on the same box?
alignment is completely outside of the game, and is one of the features of the function that converts game payouts to global utility
You seem to be muddling those two, and wondering “how much will people attempt to land on the same box, taking into account all factors, but only defining the boxes in terms of game payouts?” The answer there is “you can’t.” Because people (and computer programs) have wonky, screwed-up utility functions (e.g. (spoiler alert) https://en.wikipedia.org/wiki/Man_of_the_Year_(2006_film))
Only applicable if you’re assuming the players are VNM-rational over outcome lotteries, which I’m not. Forget expected utility maximization.
It seems to me that people are making the question more complicated than it has to be, by projecting their assumptions about what a “game” is. We have payoff numbers describing how “good” each outcome is to each player. We have the strategy spaces, and the possible outcomes of the game. And here’s one approach: fix two response functions in this game, which are functions from strategy profiles to the player’s response strategy. With respect to the payoffs, how “aligned” are these response functions with each other?
This doesn’t make restrictive rationality assumptions. It doesn’t require getting into strange utility assumptions. Most importantly, it’s a clearly-defined question whose answer is both important and not conceptually obvious to me.
(And now that I think of it, I suppose that depending on your response functions, even in zero-sum games, you could have “A aligned with B”, or “B aligned with A”, but not both.)
Then what’s the definition / interpretation of “payoff”, i.e. the numbers you put in the matrix? If they’re not utilities, are they preferences? How can they be preferences if agents can “choose” not to follow them? Where do the numbers come from?
Note that Vanessa’s answer doesn’t need to depend on uB, which I think is its main strength and the reason it makes intuitive sense. (And I like the answer much less when uB is used to impose constraints.)
I think I’ve been unclear in my own terminology, in part because I’m uncertain about what other people have meant by ‘utility’ (what you’d recover from perfect IRL / Savage’s theorem, or cardinal representation of preferences over outcomes?) My stance is that they’re utilities but that I’m not assuming the players are playing best responses in order to maximize expected utility.
Am I allowed to have preferences without knowing how to maximize those preferences, or while being irrational at times? Boltzmann-rational agents have preferences, don’t they? These debates have surprised me; I didn’t think that others tied together “has preferences” and “acts rationally with respect to those preferences.”
There’s a difference between “the agent sometimes makes mistakes in getting what it wants” and “the agent does the literal opposite of what it wants”; in the latter case you have to wonder what the word “wants” even means any more.
My understanding is that you want to include cases like “it’s a fixed-sum game, but agent B decides to be maximally aligned / cooperative and do whatever maximizes A’s utility”, and in that case I start to question what exactly B’s utility function meant in the first place.
I’m told that Minimal Rationality addresses this sort of position, where you allow the agent to make mistakes, but don’t allow it to be e.g. literally pessimal since at that point you have lost the meaning of the word “preference”.
(I kind of also want to take the more radical position where when talking about abstract agents the only meaning of preferences is “revealed preferences”, and then in the special case of humans we also see this totally different thing of “stated preferences” that operates at some totally different layer of abstraction and where talking about “making mistakes in achieving your preferences” makes sense in a way that it does not for revealed preferences. But I don’t think you need to take this position to object to the way it sounds like you’re using the term here.)
Tabooing “aligned”: what property are you trying to map on a scale of “constant sum” to “common payoff”?
Good question. I don’t have a crisp answer (part of why this is an open question), but I’ll try a few responses:
To what degree do player 1’s actions further the interests of player 2 within this normal-form game, and vice versa?
This version requires specific response functions.
To what degree do the interests of players 1 and 2 coincide within a normal form game?
This feels more like correlation of the payout functions, represented as vectors.
So, given this payoff matrix (where P1 picks a row and gets the first payout, P2 picks column and gets 2nd payout):
5 / 0 ; 5 / 100
0 / 100 ; 0 / 1
Would you say P1’s action furthers the interest of player 2?
Would P2′s action further the interest of player 1?
Where would you rank this game on the 0 − 1 scale?
Hm. At first glance this feels like a “1” game to me, if they both use the “take the strictly dominant action” solution concept. The alignment changes if they make decisions differently, but under the standard rationality assumptions, it feels like a perfectly aligned game.
Correlation between outcomes, not within them. If both players prefer to be in the same box, they are aligned. As we add indifference and opposing choices, they become unaligned. In your example, both people have the exact same ordering of outcomes. In a classic PD, there is some mix. Totally unaligned (constant-sum) example:
0 / 2 ; 2 / 0
2 / 0 ; 0 / 2
The usual Pearson correlation in particular is also insensitive to positive affine transformations of either player’s utility, so it seems to be about the right thing; it doesn’t just check whether the incomparable utility values are equal.
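A quick sketch checking that invariance claim (the payoff numbers and affine constants below are arbitrary illustrations):

```python
# Sketch: Pearson correlation is unchanged by positive affine
# transformations of either player's payoffs, since centering removes
# the shift and the scale factor cancels between covariance and std dev.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

p1 = [3, 0, 5, 1]                    # one player's payoffs over four outcomes
p2 = [3, 5, 0, 1]                    # the other player's payoffs
rescaled = [10 * x - 7 for x in p1]  # positive affine transform (a=10, b=-7)

assert abs(pearson(p1, p2) - pearson(rescaled, p2)) < 1e-12
```

This is the same invariance that cardinal (interval-scale) utility representations have, which is why the metric avoids comparing the two players' incomparable utility units directly.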