Ah, okay. Does simply changing the phrasing to “zero or negative sum” fix the issue?
I think Vanessa is right. You’re looking for a term to describe games where threats or cooperation are possible. The term for such games is non-zero-sum.
There are two kinds of games: zero-sum (or fixed-sum) games, where the sum of payoffs to the players is always the same regardless of what they do, and non-zero-sum (or variable-sum) games, where the sum can vary based on what the players do. In the first kind of game, threats and cooperation don’t exist, because anything that helps one player automatically hurts the other by the same amount. In the second kind, threats and cooperation are possible: e.g. there can be a button that nukes both players, which represents both a threat and an opportunity for cooperation.
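To make the distinction concrete, here is a minimal sketch with made-up payoff numbers (matching pennies as the constant-sum case, a prisoner’s dilemma as the variable-sum case); none of these numbers come from the thread, they’re just illustrative:

```python
# Illustrative payoff tables (numbers assumed): each outcome maps to
# (player 1 payoff, player 2 payoff).

# Matching pennies: a zero-sum game. Every cell sums to 0, so anything
# that helps one player hurts the other by the same amount.
matching_pennies = {
    ("Heads", "Heads"): (1, -1),
    ("Heads", "Tails"): (-1, 1),
    ("Tails", "Heads"): (-1, 1),
    ("Tails", "Tails"): (1, -1),
}

# Prisoner's dilemma: a variable-sum game. The total payoff depends on
# what the players do, which is what makes cooperation (and threats) matter.
prisoners_dilemma = {
    ("Cooperate", "Cooperate"): (3, 3),   # total 6
    ("Cooperate", "Defect"):    (0, 5),   # total 5
    ("Defect",    "Cooperate"): (5, 0),   # total 5
    ("Defect",    "Defect"):    (1, 1),   # total 2
}

def payoff_sums(game):
    return {outcome: sum(payoffs) for outcome, payoffs in game.items()}

print(payoff_sums(matching_pennies))   # every outcome sums to 0
print(payoff_sums(prisoners_dilemma))  # the sum varies: 6, 5, 5, 2
```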
Calling a game “zero or negative sum” is just confusing the issue. You can give everyone a ton of money unconditionally, making the game “positive sum”, and the strategic picture of the game won’t change at all. The strategic feature you’re interested in isn’t the sign, but the variability of the sum, which is known as “non-zero-sum”.
If you’re thinking about strategic behavior, an LWish folk knowledge of prisoner’s dilemmas and such is really not much to go on. Going through a textbook of game theory and solving exercises would be literally the best time investment. My favorite one is Ken Binmore’s “Fun and Games”; it’s on my desk now. An updated version, “Playing for Real”, can be downloaded for free.
Hmm. So it sounds like I’d passively absorbed some wrong terminology/concepts. What I’m currently unclear on is whether the wrong terminology I absorbed was conceptually false or just “not what game theorists mean by the term.”
A remaining confusion (feel free to say “just read the book” rather than explaining this; I’ll check out Playing for Real when I get some time):
Why there can’t be threats or cooperation in a game that is zero-sum but has more than 2 players. (i.e. my conception is that pecking-order status is zero-sum, or at least many people think it is, and the thing at stake in the OP was Erica/Frank’s conception of their relative status. If they destroy each other’s reputations, two other people in the company would rise above them. If they cooperate and both gain, this necessarily comes at the expense of other players. Why doesn’t this leave room for threats and cooperation?)
You’re right: if there are more than 2 players, there can be threats and cooperation. However, a lot of the time when game theorists talk about zero-sum games, they talk specifically about two-player zero-sum games (because these games have special properties, e.g. the minimax theorem).
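As a toy illustration of why two-player zero-sum games are special (the matrix below is made up; entries are payoffs to the row player, and the column player gets the negative of each entry):

```python
# A made-up two-player zero-sum game given as a payoff matrix for the
# row player; the column player's payoff is the negative of each entry.
A = [
    [2, 3],
    [1, 4],
]

# The row player can guarantee at least this much (maximin).
maximin = max(min(row) for row in A)

# The column player can hold the row player to at most this much (minimax).
minimax = min(max(A[i][j] for i in range(len(A))) for j in range(len(A[0])))

# This particular matrix has a saddle point, so the two values coincide (both 2).
# The minimax theorem says that, allowing mixed strategies, the two values
# always coincide in finite two-player zero-sum games.
assert maximin == minimax == 2
```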
Okay, well if the intent is as described above (game is zero sum, more than 2 players), what changes to the text would you recommend? (I suppose the “and therefore...” is probably the most wrong part?)
I notice some confusion around past LW discussion, which has often used the phrase “positive sum” (examples include Beyond the Reach of God and Winning is For Losers). If “positive sum games” isn’t really a thing I’d have expected to run into pushback about that at some point. (This seems consistent with “LW Folk Game Theory isn’t necessarily real game theory”, but if the distinction is important it might be good to flag it in other places it’s come up.)
The thing I’m actually trying to contrast here is “the sort of strategic landscape, and orientation, where the thing to do is to fight over who wins social points, vs the sort of strategic landscape that encourages building something together.” (where fighting over who gets social points can still involve cooperation, but it’s “allies at war”-style cooperation that divides up spoils, rather than creating them)
Still interested in concrete suggestions on how to change the wording.
LW Folk Game Theory is in fact not real game theory. The key difference is that LW Folk Game Theory tends to assume that positive utility corresponds to “I would choose this over nothing” while negative utility corresponds to “I would choose nothing over this”, and 0 utility is the indifference point.
Real Game Theory does not make such an assumption. In real game theory, you take actions that maximize your (expected) utility. Importantly, if you just add a constant to your utility function (for every possible action / outcome), then the maximizing action is not going to change—there’s no concept of “0 is the indifference point”. So, if there are two outcomes o1, o2 that can be achieved, and no others, then the utility function U1 = {o1: −5, o2: −3} is strategically identical to U2 = {o1: 5, o2: 7}. In LW Folk Game Theory, “doing nothing” is usually an action and is assigned 0 utility by convention, which pins down the zero point and prevents this kind of shift.
If “positive sum games” isn’t really a thing I’d have expected to run into pushback about that at some point.
Consider a two-player game where for any outcome o, U1(o) + U2(o) = 5. Sure sounds like a positive-sum game, right? Well, by the argument above, I can replace U2 with U2′ = U2 − 5 and the game remains exactly the same. And now we have U1(o) + U2′(o) = 0, that is, for every outcome U2′(o) = −U1(o), i.e. we’re in a zero-sum game.
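A quick sketch of both steps of this argument (the game at the end is made up, but any constant-sum payoffs would work the same way):

```python
# Step 1: adding a constant to a utility function doesn't change which
# outcome maximizes it (using the numbers from the comment above).
outcomes = ["o1", "o2"]
U = {"o1": -5, "o2": -3}
U_shifted = {o: U[o] + 10 for o in outcomes}   # becomes {"o1": 5, "o2": 7}

assert max(outcomes, key=lambda o: U[o]) == max(outcomes, key=lambda o: U_shifted[o])

# Step 2: a "positive-sum" game whose payoffs always total 5 turns into a
# zero-sum game once player 2's utility is shifted down by 5, with nothing
# strategically relevant changing. (Payoff numbers are made up.)
game = {"a": (2, 3), "b": (4, 1), "c": (0, 5)}    # U1(o) + U2(o) = 5 everywhere
assert all(u1 + u2 == 5 for u1, u2 in game.values())

shifted_game = {o: (u1, u2 - 5) for o, (u1, u2) in game.items()}
assert all(u1 + u2 == 0 for u1, u2 in shifted_game.values())
```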
As cousin_it said, really they shouldn’t be called zero-sum games; they should be called fixed-sum or constant-sum games. Two-player constant-sum games are perfectly competitive, and as a result there are no threats: anything that hurts the other player helps you by exactly the same amount, and so you do it.
(As you note, if there are more than 2 players, you can get things like threats and collaboration, e.g. the weaker two players collaborate to overthrow the stronger one.)
Re: expecting pushback, I generally don’t expect LW terminology to agree particularly well with academia. The goals are different, and the terminology reflects this. LW wants to be able to compare everything to “nothing happened”, so there’s a convention that nothing happens = 0 utility. Real game theory doesn’t want to make that comparison; it prefers elegance and fewer assumptions.
LW “positive-sum games” means “both players are better off than if they did nothing”, or at least “one of the players is better off by an amount greater than the amount the other player is worse off”. Similarly for “negative-sum games”. This is fundamentally about comparing to “nothing happens”. Real game theory doesn’t care; it is all about action selection, and many games don’t have a “nothing happens” option. (See e.g. the prisoner’s dilemma, where you must cooperate or defect; you can’t choose to leave the game.)
The thing I’m actually trying to contrast here is “the sort of strategic landscape, and orientation, where the thing to do is to fight over who wins social points, vs the sort of strategic landscape that encourages building something together.”
I usually call this competitive vs. collaborative, and games/strategies can be on a spectrum between competitive and collaborative. The maximally competitive games are two-player zero-sum games. The maximally collaborative games are common-payoff games (where all players have the same utility function). Other games fall in between.
(where fighting over who gets social points can still involve cooperation, but it’s “allies at war”-style cooperation that divides up spoils, rather than creating them)
Here it seems like there is both a collaborative aspect (maximizing the amount of spoils that can be shared between the two) and a competitive aspect (getting the largest fraction of the available spoils).
Seconding everything that Rohin said.
More generally, if you want to talk in an informed way about any science topic that’s covered on LW (game theory, probability theory, computational complexity, mathematical logic, quantum mechanics, evolutionary biology, economics...) and you haven’t read some conventional teaching materials and done at least a few conventional exercises, there’s a high chance you’ll be kidding yourself and others. Eliezer gives an impression of getting away with it, but a) he does read stuff and solve stuff, and b) cutting corners has burned him a few times.
Pages 4-5 of my copy of The Strategy of Conflict define two terms:
Pure Conflict: In which the goals of the players are opposed completely (as in Eliezer’s “The True Prisoner’s Dilemma”)
Bargaining: In which the goals of the players are somehow aligned so that making trades is better for everyone
Schelling goes on to argue (again, just on page 5) that most “Pure Conflicts” are actually not, and that people can do better by bargaining instead. Then, he creates a spectrum from Conflict games to Bargaining games, setting the stage for the framework the book is written from.
[edit-in-under-5-minutes: Note that, even in the Eliezer article I posted above, we can see that super typical conflicts STILL benefit from bargaining. Some people informally make the distinction between “dividing up a pie between 2 people” and “working together to make more pies”, and you can clearly see how you can “make more pies” in a PD.]
I’m pretty unhappy with the subthread talking about how wrong LessWrong Folk Game Theory is and how Game Theory doesn’t use these topics. One of the big base-level Game Theory books takes the first few pages to write about the term you wanted, and I feel everyone could have looked around more before writing off your question as ignorant.
Schelling actually uses the term “zero sum game” repeatedly in his essay “Toward a Theory of Interdependent Decision”, even explicitly equating it to “pure conflict”. This essay starts on page 83 of my copy of the book.
I only realized this after my comment while flipping through, so I was going to leave it off, but it’s been driving me mad for a few days since it significantly strengthens my above argument and explains why I find the derision in the replies so annoying.
Thirding what the others said, but I wanted to also add that, rather than actual game theory, what you may be looking for here may instead be the anthropological notion of limited good?