Conflict vs. mistake in non-zero-sum games
Summary: Whether you behave like a mistake theorist or a conflict theorist may depend more on your negotiating position in a non-zero-sum game than on your worldview.
Disclaimer: I don’t really know game theory.
Plot the payoffs in a non-zero-sum two-player game, and you’ll get a convex[1] set with the Pareto frontier on the top and right:
You can describe this set with two parameters: The surplus is how close the outcome is to the Pareto frontier, and the allocation tells you how much the outcome favors player 1 versus player 2. In this illustration, the level sets for surplus and allocation are depicted by concentric curves and radial lines, respectively.
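The decomposition can be sketched numerically. This is only an illustrative parametrization of my own (total gain over the origin as a surplus proxy, angle about the origin as the allocation); the function name, the fixed origin, and the straight-line surplus level sets are my assumptions, not the post's (which draws curved level sets).

```python
import math

# Illustrative sketch (not from the post): describe a payoff pair by
# (surplus, allocation) relative to a fixed origin / BATNA.
def surplus_and_allocation(payoff, origin=(0.0, 0.0)):
    dx = payoff[0] - origin[0]
    dy = payoff[1] - origin[1]
    # Crude surplus proxy: total gain over the origin. (The post draws
    # surplus level sets as concentric curves; lines are simpler.)
    surplus = dx + dy
    # Allocation as the angle about the origin, so its level sets are
    # the radial lines in the post's picture.
    allocation = math.atan2(dy, dx)
    return surplus, allocation

print(surplus_and_allocation((3.0, 1.0)))  # higher surplus, skewed to player 1
```

Under this toy parametrization, moving toward the Pareto frontier raises the first coordinate, while rotating along the frontier changes only the second.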
It’s tempting to decompose the game into two phases: A cooperative phase, where the players coordinate to maximize surplus; and a competitive phase, where the players negotiate how the surplus is allocated.
Of course, in the usual formulation, both phases occur simultaneously. But this suggests a couple of negotiation strategies where you try to make one phase happen before the other:
- “Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”
- “Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”
I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.
Indeed, the mistake theory strategy pushes the obviously good plan of making everyone better off. It can frame all opposition as making the mistake of leaving surplus on the table.
The conflict theory strategy threatens to destroy surplus in order to get a more favorable allocation. Its narrative emphasizes the fact that the players can’t maximize their rewards simultaneously.
Now I don’t have a good model of negotiation. But intuitively, it seems that mistake theory is a good strategy if you think you’ll be in a better negotiating position once you move to the Pareto frontier. And conflict theory is a good strategy if you think you’ll be in a worse negotiating position at the Pareto frontier.
If you’re naturally a mistake theorist, this might make conflict theory seem more appealing. Imagine negotiating with a paperclip maximizer over the fate of billions of lives. Mutual cooperation is Pareto efficient, but unappealing. It’s more sensible to threaten defection in order to save a few more human lives, if you can get away with it.
It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)
This is kind of unfair to mistake theory, which is supposed to be about educating decision-makers on efficient policies and building institutions to enable cooperation. None of that is present in this model.
But I think it describes something important about mistake theory which is usually rounded off to something like “[mistake theorists have] become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all”.
The reason I’m thinking about this is that I want a theory of non-zero-sum games involving counterfactual reasoning and superrationality. It’s not clear to me what superrational agents “should” do in general non-zero-sum games.
I liked this article. It presents a novel view on mistake theory vs conflict theory, and a novel view on bargaining.
However, I found the definitions and arguments a bit confusing/inadequate.
Your definitions:
The wording of the options was quite confusing to me, because it’s not immediately clear what “doing something first” and “doing some other thing second” really means.
For example, the original Nash bargaining game works like this:
First, everyone simultaneously names their threats. This determines the BATNA (best alternative to negotiated agreement), usually drawn as the origin of the diagram. (Your story assumes a fixed origin is given, in order to make “allocation” and “surplus” well-defined. So you are not addressing this step in the bargaining process. This is a common simplification; EG, the Nash bargaining solution also does not address how the BATNA is chosen.)
Second, everyone simultaneously makes demands, IE they state what minimal utility they want in order to accept a deal.
If everyone’s demands are mutually compatible, everyone gets the utility they demanded (and no more). Otherwise, negotiations break down and everyone plays their BATNA instead.
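The demand step described above is simple enough to write down directly. A minimal sketch, where the feasibility rule and the specific numbers are my invented assumptions, not part of the original game description:

```python
# Sketch of the demand stage of the Nash bargaining game: simultaneous
# demands, honored exactly if jointly feasible, else fall back to BATNA.
def demand_game(demand1, demand2, feasible, batna):
    if feasible(demand1, demand2):
        # Compatible demands: each player gets what they asked for, no more.
        return demand1, demand2
    # Negotiations break down: everyone plays their BATNA.
    return batna

# Invented example: deals are feasible when demands sum to at most 10.
feasible = lambda d1, d2: d1 + d2 <= 10
print(demand_game(6, 4, feasible, batna=(0, 0)))  # -> (6, 4)
print(demand_game(7, 5, feasible, batna=(0, 0)))  # -> (0, 0), breakdown
```

Note the "and no more" clause: over-demanding risks breakdown, which is what gives the demand stage its strategic bite.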
In the sequential game, threats come “first” and demands “second”. However, because of backward induction, people usually solve the game by working out the demand strategies first and then selecting threats: once you know how people will make demands (with threats visible), you know how to strategize about the first step of play.
And, in fact, analysis of the strategy for the Nash bargaining game has focused on the demands step, almost to the exclusion of the threats step.
So, if we represent bargaining as any sequential game (be it the Nash game above, or some other), then order of play is always the opposite of the order in which we think about things.
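This "reason in the reverse of play order" point can be made concrete with a toy two-stage game. Everything here is invented for illustration: only player 1 picks a threat (the real game has simultaneous threats), and the demand stage is assumed to resolve to an equal split of a fixed surplus.

```python
# Toy backward induction: solve the later (demands) stage first, then
# use that solution to choose in the earlier (threats) stage.
SURPLUS = 10

def solve_demands(batna):
    # Later stage, solved first: on top of the BATNA fixed by the
    # threats, assume demands resolve to an equal split of SURPLUS.
    b1, b2 = batna
    return b1 + SURPLUS / 2, b2 + SURPLUS / 2

def solve_threats(threat_options):
    # Earlier stage, solved second: knowing how demands will resolve,
    # player 1 picks the threat whose induced final payoff suits them.
    return max(threat_options, key=lambda t: solve_demands(t)[0])

# A harmless threat vs. one that skews the BATNA toward player 1:
print(solve_threats([(0, 0), (2, -1)]))  # -> (2, -1)
```

Even though threats are played first, the code (like the analyst) has to work out `solve_demands` before `solve_threats` means anything.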
So when you say:
I came up with two very different interpretations:
- Let’s arrange our bargaining rules so that we select the surplus first, and then select the allocation after that. Setting up the rules this way actually focuses our attention first on how we would choose an allocation given different surplus choices (if we reason about the game by backward induction), thereby centering our decision-making on allocation and making our choice of surplus a relatively trivial consequence of how we reason about allocation strategies.
- Let’s arrange our bargaining rules so that we select the allocation first, and only after that decide on the surplus. This way of setting up the rules focuses on maximizing surplus, because hopefully, no matter which allocation we choose, we will then be able to agree to maximize surplus. (This holds so long as everyone reasons by backward induction rather than using UDT.)
The text following these definitions seemed to assume the definitions were already clear, so it didn’t provide any immediate help in clearing up the intended meaning. I had to get all the way to the end of the article to see the overall argument and then think about which you meant.
Your argument seems mostly consistent with “mistake theory = allocation first”, focusing negotiations on good surplus, possibly at the expense of allocation. However, you also say the following, which suggests the exact opposite:
In the end, I settled on a yet-different interpretation of your definitions. A mistake theorist believes that maximizing surplus is the more important of the two concerns, and that determining allocation is of secondary importance. A conflict theorist believes the opposite.
This makes you most straightforwardly correct about what mistake theorists and conflict theorists want. Mistake theorists focus on the common good. Conflict theorists focus on the relative size of their cut of the pie.
A quick implication of my definition is that you’ll tend to be a conflict theorist if you think the space of possible outcomes is all relatively close to the Pareto frontier (IE, if you think the game is close to a zero-sum one). You’ll be a mistake theorist if you think there is wide variation in how close to the Pareto frontier different solutions are (EG, if you think there is a lot for everyone to gain, or to lose).
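One way to see this heuristic concretely, with numbers I made up: compare the spread of total payoff (a crude surplus proxy) across candidate outcomes in a near-zero-sum set versus a strongly positive-sum one.

```python
# Invented numeric illustration: if all candidate outcomes sit near the
# Pareto frontier, surplus barely varies and the fight is all about
# allocation (conflict theory); if surplus varies a lot, getting it
# right dominates (mistake theory).
def surplus_spread(outcomes):
    """Spread of total payoff (crude surplus proxy) across outcomes."""
    totals = [x + y for x, y in outcomes]
    return max(totals) - min(totals)

near_zero_sum = [(9, 1), (5, 5), (1, 9)]  # all on the line x + y = 10
positive_sum = [(1, 1), (5, 5), (9, 9)]   # surplus varies widely

print(surplus_spread(near_zero_sum))  # -> 0: allocation is everything
print(surplus_spread(positive_sum))   # -> 16: surplus dominates
```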
On my theory, mistake theorists will be happy to discuss allocations first, because this virtually guarantees that afterward everyone will agree on the maximum surplus for the chosen allocation. The unsavory mistake theorists you describe are either making a mistake or being devious (and therefore sound like secret conflict theorists, though really it’s not a black-and-white thing).
On the other hand, your housing example is one where there’s first a precommitment about allocation, but the prospects for agreeing on a high surplus afterward don’t seem so good.
I think this is partly because the backward-induction assumption isn’t a very good one for humans, who use UDT-like obstinance at times. It’s also worth mentioning that choosing between “surplus first” and “allocation first” bargaining isn’t a very rich set of choices. Realistically there can be a lot more going on, so I’d guess mistake theorists can end up preferring to agree on Pareto-efficiency first or to sort out allocations first, depending on the complexities of the situation.
These ordering issues seem very confusing to think about, and it seems better to focus on perceived relative importance of allocation vs surplus, instead.
I found this review helpful for getting me… “properly confused” about how negotiation games work.