So, we could consider a game completely adversarial if it has a structure like this: no strategy profile is a Pareto improvement over any other. In other words, the feasible outcomes of the game equal the game’s Pareto frontier: all possible outcomes involve trade-offs between players.
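To make the condition concrete, here is a minimal sketch in Python (the function names, the payoff numbers, and the matching-pennies example are my own illustration, not something from this discussion) that checks it for a finite game given as a payoff table: the game counts as completely adversarial exactly when no strategy profile’s payoff vector is a Pareto improvement over another’s.

```python
def pareto_improves(u, v):
    """True if payoff vector u is a Pareto improvement over v:
    no player is worse off, and at least one is strictly better off."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def completely_adversarial(payoffs):
    """payoffs maps each strategy profile to a tuple of player payoffs.
    Returns True iff no outcome Pareto-improves on another, i.e. every
    feasible outcome already lies on the Pareto frontier."""
    outcomes = list(payoffs.values())
    return not any(pareto_improves(u, v) for u in outcomes for v in outcomes)

# Matching pennies: every outcome trades one player's gain against the other's loss.
matching_pennies = {
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}
print(completely_adversarial(matching_pennies))  # True
```

Any common-payoff game with both a winning and a losing outcome would return False here, which is the point at issue below.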
I must have missed some key word—by this definition, wouldn’t common-payoff games be “completely adversarial”, because the “feasible” outcomes equal the Pareto frontier under the usual assumptions?
As an example, I think the game “both players win if they choose the same option, and lose if they pick different options” has “the two players pick different options, and lose” as one of the feasible outcomes, and that outcome is not on the Pareto frontier: if they had picked the same thing, they would both win, which would be a Pareto improvement.
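Spelling that example out with concrete numbers (I am arbitrarily using 1 for a win and 0 for a loss; the option names are placeholders), the mismatch outcome is feasible but sits off the Pareto frontier:

```python
# The matching game above: both players win (1) if they match, lose (0) otherwise.
matching_game = {
    ("A", "A"): (1, 1), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}

mismatch = matching_game[("A", "B")]  # (0, 0): feasible, but not on the frontier
match = matching_game[("A", "A")]     # (1, 1)

# Matching is a Pareto improvement over mismatching: both players do strictly better,
# so this common-payoff game is not "completely adversarial" under the definition above.
assert all(m > n for m, n in zip(match, mismatch))
```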
Right, I understand how this correctly labels certain cases, but that doesn’t seem to address my question?
How so? The common-payoff game where you and I each name a number and we both receive the sum of the numbers we name admits a Pareto improvement on any strategy profile: we can always name higher numbers.
Maybe the confusion was the way I used “feasible”? Does it have a different definition in game theory? I stick by the first phrasing I used: a game is completely adversarial if no strategy profile is a Pareto improvement over any other.
I read “feasible” as something like “rationalizable.” I think it would have been much clearer if you had said “if no strategy profile is a Pareto improvement over any other.”
My game theory is a bit rusty, but I remember the Pareto frontier as referring to an equal-overall-utility condition, while a Pareto improvement requires that no participant become worse off. In other words, you can move along the frontier by negatively impacting other players (that is, by not making Pareto improvements). This situation makes the players adversaries, because there are, strictly speaking, no longer any benefits from cooperating.
You’re thinking of a Kaldor-Hicks optimality frontier, {outcomes with maximal total payoff}, whereas the Pareto frontier is {the maximal elements of the unanimous-agreement preference ordering over outcomes}.
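As a quick illustration of how the two frontiers come apart (the payoff vectors are hypothetical numbers of my own choosing):

```python
def pareto_improves(u, v):
    """u is a Pareto improvement over v: everyone weakly better off, someone strictly."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

# Hypothetical (player 1, player 2) payoffs for four outcomes.
outcomes = [(3, 0), (0, 3), (2, 2), (1, 1)]

# Pareto frontier: outcomes that nothing Pareto-improves on.
pareto_frontier = [u for u in outcomes if not any(pareto_improves(v, u) for v in outcomes)]

# Maximal-total-payoff frontier (the Kaldor-Hicks-style set above).
best_total = max(sum(u) for u in outcomes)
sum_frontier = [u for u in outcomes if sum(u) == best_total]

print(pareto_frontier)  # [(3, 0), (0, 3), (2, 2)]
print(sum_frontier)     # [(2, 2)]
```

Only (1, 1) falls off the Pareto frontier, because (2, 2) makes both players better off; (3, 0) and (0, 3) stay on the frontier despite their lower totals, since improving one player’s payoff from there would have to hurt the other.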