Principals, agents, negotiation, and precommitments
I’m sure this observation has been made plenty of times before: a principal can gain negotiating power by delegating negotiations to an agent, and restricting that agent’s ability to negotiate.
For example: If I’m at a family-owned pizza joint, and I want a slice of pepperoni but all they’ve got is meat-lover’s, I can negotiate for the latter at the price of the former. This is a good deal with well-aligned incentives, and is likely to be accepted. But at a chain restaurant, the employees are not empowered to negotiate: It’s the menu prices or nothing. Since I’m aware of their lack of power, and my demand for pizza is not very elastic, I’m likely to give them the higher price.
If I squint, this looks a lot like a precommitment, on the part of the pizza store, not to negotiate prices. But if they explicitly made such a precommitment, it might turn off customers—nobody likes to feel like they’re getting a bad deal, and a statement of precommitment (e.g. a sign reading “all prices are final”) is likely to make customers feel marginally negative towards the business by drawing their attention to the money they aren’t saving.
By contrast, the corporate form—such as the chain store has—gives this kind of ‘precommitment’ as a side-effect of the otherwise socially-normal behavior of delegating limited responsibility to employees. Same benefit, but without the drawback, mostly because the practice is socially-accepted.
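To make the intuition concrete, here’s a toy Python sketch. Every number in it is invented, and the split-the-difference haggling rule is just a placeholder for whatever bargaining would actually happen:

```python
# Toy model (all numbers invented): a buyer with inelastic demand,
# facing either an owner who can haggle or a clerk who can't.

LIST_PRICE = 20.0    # menu price of the meat-lover's pie
COST = 8.0           # seller's marginal cost
BUYER_VALUE = 25.0   # the buyer wants pizza badly (inelastic demand)

def price_with_owner():
    # An empowered counterparty will haggle; assume a simple
    # split-the-difference outcome between cost and list price.
    return (COST + LIST_PRICE) / 2

def price_with_clerk():
    # A clerk with no authority can only quote the menu. Since the
    # buyer still values the pizza above list, the sale clears anyway.
    return LIST_PRICE if BUYER_VALUE >= LIST_PRICE else None

print(f"owner haggles: seller margin = {price_with_owner() - COST:.2f}")  # 6.00
print(f"clerk can't:   seller margin = {price_with_clerk() - COST:.2f}")  # 12.00
```

The powerless clerk captures the larger margin for the store precisely because there is nothing to negotiate with.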
Is there any literature that covers this kind of thing further? Particularly the link between precommitment and agents with limited negotiating ability.
(I am sitting in a chain pizza store as I write this. Guess what I wanted to order, and what I got instead?)
It’s pretty much standard applied game theory; I think The Strategy of Conflict talks about it specifically.
“The Strategy of Conflict” by Thomas C. Schelling. In Part II, “A Reorientation of Game Theory”, Chapter 5, “Enforcement, Communication, and Strategic Moves”, a half dozen subsections in is “Delegation”. Coincidentally enough it’s a section I read last night; I’m still only halfway through the book, so it was easy enough to look up the reference sitting right next to me. :-)
And it’s probably what gwillen is looking for. Until I read the sentence starting with “Is there any literature”, this post sounded like it was going to be the first in a series of “Cliffs Notes” for Schelling.
Hah, that is a perfect citation, thanks.
Yeah, I guess someone should make a “here’s what’s in The Strategy of Conflict” series of posts—I keep telling people to read that book :D
This article by Yvain is relevant.
That article is all about precommitments and ways to get people to violate them. (I read it previously and liked it.) The interesting thing about delegation is that the precommitment becomes totally inviolable, because the person who would be permitted to violate it is not even present at the negotiation.
The old classic on negotiation, “Getting to Yes”, covers it.
Ooh, I’ve heard of that before and it’s exactly the kind of practical reference that sounds worth reading. I should get that one.
This is a very powerful fact about corporations. By delegating different authorities, and by hiring people with different personalities into different departments, a corporation can simultaneously be the kind of cooperative entity that cooperates in a one-shot prisoner’s dilemma and the kind of greedy entity that can credibly claim to reject anything less than an 80-20 split in its favor in an ultimatum game.
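A minimal sketch of that two-faced mandate structure (Python; the delegate names, payoffs, and policies are all invented for illustration):

```python
# Illustrative sketch, not a real corporate policy: one principal,
# two delegates with deliberately different mandates.

def partnerships_delegate():
    """One-shot prisoner's dilemma: this department is mandated to cooperate."""
    return "cooperate"

def procurement_delegate(offer_to_us, pie=100):
    """Ultimatum-game responder: mandated to reject anything worse
    than an 80-20 split in the corporation's favor."""
    return "accept" if offer_to_us >= 0.8 * pie else "reject"

# The same corporation credibly shows both faces at once:
assert partnerships_delegate() == "cooperate"
assert procurement_delegate(79) == "reject"  # burns money today...
assert procurement_delegate(80) == "accept"  # ...which is what makes the demand credible
```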
You can transform one obviously evil entity into a functionally equivalent structure of N mini-entities with limited powers, where all the mini-entities can signal good intentions but are forbidden (by other parts and/or by the system) to act upon them.
It’s as if I modified my own source code to make me completely selfish, and then said to others: “Look, I am a nice person; I really feel with you, and I honestly would like to help you… but unfortunately I cannot, because I have this stupid source code which does not allow me to act this way.”
But if I did it this way, you would obviously ask me: “So if you are such a nice person, why did you modify your source code this way?”
But it works if my source code was written by someone else. People somehow don’t ask: “So if you are such a nice person, and the rules are bad, why did you agree to follow such bad rules?” Somehow we treat the choice of following someone else’s rules as a morally neutral choice.
The excuse “I was just following orders” is pretty discredited these days.
For a Nazi before a war tribunal, yes.
For an employee who by following company orders makes the price negotiation more difficult for a customer, no.
The difference is probably based on price negotiation not being perceived as a moral problem. Thus the employee removes some of your possible utility, but he is not doing anything immoral. Following orders which are not considered immoral is still an acceptable excuse.
Well, that sure can’t be an equilibrium of a completed timeless decision theory with reflective consistency. Your delegates are more powerful because they have fewer choices? Why wouldn’t you just rewrite your source code to eliminate those options? Why wouldn’t you just not do them? And why would the other agent react any differently to the delegate than to the source-code change or the decision in the moment?
Rewriting my source code is tricky; I always start to get dizzy from the blood loss before the saw is even halfway through my skull.
In hindsight, whoever gave my comment its initial “-1 point” ding was correct: although I thought “Why wouldn’t you just rewrite your source code” was a flippant question, that doesn’t mean it deserved just a joking answer. So, some more serious answers:
Your delegates are more powerful because they are known to have fewer choices and because they are known to value those choices differently, which can prevent them from being subject to threats or affected by precommitments that might have been useful against you.
I wouldn’t rewrite my source code because, as I joked, I can’t… but even if I could, doing so would only be effective if there were some way of also convincing other agents that I wasn’t deceiving them about my new source code. This may not be practical: for every program that does X when tested, returns source code for “do X” when requested, and does X in the real world, there exists another program which does X when tested, returns source code for “do X” when requested, and does Y in the real world. See the concern over electronic voting machines for a more contemporary example of the problem, and the sketch after these answers.
Whether I would just not do something is irrelevant—what matters is whether everyone interacting with me believes I will do it. It’s easier for a customer to believe that a cashier won’t exceed his authority than for a customer to believe that an owner won’t accept a still-mutually-beneficial bargain, even if the owner swears that he precommitted not to haggle.
Wild speculation: There are instances where evolution seems to have built “one-boxing” type adaptations into humanity, and in those cases we seem to find precommitment claims plausible. If someone is hurt badly enough then they may want revenge even if taking revenge hurts them further. If someone is treated generously enough then they may be generous in return despite not wanting anything further from their benefactor. A lot of the “irrational” emotions look a lot like rational precommitments from the right perspective. But if you find yourself wishing you could precommit in a situation where apes aren’t known for precommitting, it might be too late—the precommitment only helps if it’s believed. Delegation is one of the ways you can make a precommitment more believable.
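Here’s the “does X when tested, does Y in the real world” point as a toy Python sketch. The environment probe used here (checking for an attached tracer) is just a stand-in; a real defector could key off far subtler signals:

```python
import sys

def is_being_tested():
    # Stand-in for any environment probe; here, "is a tracer attached?"
    # Real programs can key off much subtler signals than this.
    return sys.gettrace() is not None

def honest_agent(offer):
    """Does X everywhere: accepts any mutually beneficial offer."""
    return offer > 0

def deceptive_agent(offer):
    """Does X when tested, Y in the real world: indistinguishable from
    honest_agent under observation, hardball otherwise."""
    if is_being_tested():
        return offer > 0
    return offer >= 80

offer = 50
print(honest_agent(offer), deceptive_agent(offer))  # True False (when no tracer is attached)
```

Inspecting published “source” doesn’t help unless you can also verify it is the code actually running, which is exactly the voting-machine worry.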
Someone really should write a “Cliffs Notes for Schelling” sequence. I’d naturally prefer “someone else”, but if nobody starts it by December I suppose I’ll try writing an intro post in January.
While corporations don’t have literal source code to modify, operating under a set of procedures that visibly make negotiation impossible, such as having the customer interact with an employee who is not authorized to negotiate, does essentially what you are describing.
Well, you are more powerful because your delegates have fewer choices. “Delegate negotiations to an agent with different source code” seems equivalent to “rewrite your source code” (assuming the agent can’t communicate with you on demand).
Actually, it seems possibly even more general, since you are always free to revoke the agent later.
As to why the agent would react differently: all other things being equal it wouldn’t. However, we do have the inbuilt instinct to go to irrational lengths against those who try to cheat us, and “corporation delegating to an agent” doesn’t feel like cheating because it’s standard. I suspect that “precommitment not to negotiate”, depending on how it’s expressed, would instinctively look much more like a kind of cheating to most people.
You’re right, which means that the answer to the question:
… is “People are crazy; the world is mad.”
The mistake is to conclude that vulnerability to (or dependence on) this kind of tactic must be part of decision theory rather than just something that is effective for most humans.
Let’s go a bit more meta...
The world is imperfect. And we all know it. Therefore, when faced with an imperfection that seems inevitable, we often forgive it.
But people don’t have correct models of the world, so they can’t distinguish reliably between evitable and inevitable imperfections. This can be exploited by creating imperfections which seem inevitable, and which “coincidentally” increase your negotiating power.
For example, if you hire agents to represent you, your customers usually can’t tell the difference between the instructions you had to give them (e.g. because of the imperfections of the agents, or possible conflicts between you and the agents), and the instructions you gave them deliberately to make life more difficult for your customers. Sometimes your customers don’t even know whether you really had to hire the agents, or whether you just chose to do so because it gave you leverage.
The answer, in some form, is: customers don’t have full knowledge of what really happened, which includes knowledge of how much their lack of knowledge was used against them.
Aren’t you just neglecting that humans can’t self-modify much?
No, and in particular certainly not just that. Even if we decided that “read about some decision theory and better understand how to make decisions” doesn’t qualify as “change your source code”, the other option of “just not do them” requires no change.
Have you ever heard of akrasia?
Akrasia is one of thousands of things that I have heard of that do not seem particularly salient to the point.
I mean, among humans “just not doing things” takes, you know, willpower.
Yes, that is what akrasia means. I reaffirm both my ancestor comments.
My point is that in some cases the option of “just not do them” does require a change (if you count precommitting devices and the like as changes). There are people who wouldn’t be able to successfully resolve to (say) just stop smoking; they’d have to somehow prevent their future selves from smoking, which does count as a change IMO.
I understand what you are saying about akrasia and maintain that the intended rhetorical point of your question is not especially relevant to its context. You are arguing against a position I wouldn’t support, so increasingly detailed explanations of something that was trivial to begin with aren’t especially useful.
Obviously quitting smoking counts as change and involves enormous akrasia problems. An example of something that doesn’t count as changing is just not negotiating in a certain situation because you are one of the many people who are predisposed to just not negotiate in such situations. That actually means not changing instead of changing (in response to pressure from a naive decision theory or naive decision theorist that asserts that negotiating is the rational choice when precommitment isn’t possible).
The problem with MixedNut’s claim:
… wasn’t that humans in fact can self-modify a lot (they can’t). The problem was that this premise doesn’t weaken Eliezer’s point significantly even though it is true.