Central theme in: Immoral Mazes Sequence, but this generalizes.
When looking to succeed, pain is not the unit of effort, and money is a, if not the, unit of caring.
One is not always looking to succeed.
Here is a common type of problem.
You are married, and want to take your spouse out to a romantic dinner. You can choose the place your spouse loves best, or the place you love best.
A middle manager working their way up the corporate ladder must choose how to improve the factory's widget production. He can choose a policy that improperly maintains the factory and will likely eventually poison the water supply, or a policy that would prevent that at additional cost.
A politician can choose between a bill that helps the general population, or a bill that helps their biggest campaign contributor.
A start-up founder can choose between building a quality product without technical debt, or creating a hockey stick graph that will appeal to investors.
You can choose to make a gift yourself. This would cost you time and produce something of lower quality, but be more thoughtful and cheaper. Or you could buy one in the store, which would be higher quality and take less time, but feel generic and cost more money.
You are cold. You can buy a cheap scarf, or a better but more expensive scarf.
These are trade-offs. Sometimes one choice will be made, sometimes the other.
Now consider another type of problem.
You are married, and want to take your spouse out to a romantic dinner. You could choose a place you both love, or a place that only they love. You choose the place you don’t love, so they will know how much you love them. After all, you didn’t come here for the food.
A middle manager must choose how to improve widget production. He can choose a policy that improperly maintains the factory and likely eventually poisons the water supply, or a policy that would prevent that at no additional cost. He knows that when he is up for promotion, management will want to know the higher ups can count on him to make the quarterly numbers look good and not concern himself with long term issues or what consequences might fall on others. If he cared about not poisoning the water supply, he would not be a reliable political ally. Thus, he chooses the neglectful policy.
A politician can choose between two messages that affirm their loyalty: Advocating a beneficial policy, or advocating a useless and wasteful policy. They choose useless, because the motive behind advocating a beneficial policy is ambiguous. Maybe they wanted people to benefit!
A start-up founder can choose between building a quality product without technical debt and creating a hockey stick graph with it, or building a superficially similar low-quality product with technical debt and using that. Both are equally likely to create the necessary graph, and both take about the same amount of effort, time and money. They choose the low-quality product, so the venture capitalists can appreciate their devotion to creating a hockey stick graph.
You can choose between making a gift and buying a gift. You choose to make a gift, because you are rich and buying something from a store would be meaningless. Or you are poor, so you buy something from a store, because a handmade gift wouldn’t show you care.
Old joke: One Russian oligarch says, “Look at my scarf! I bought it for ten thousand rubles.” The other says, “That’s nothing, I bought the same scarf for twenty thousand rubles.”
What these examples have in common is that there is a strictly better action and a strictly worse action, in terms of physical consequences. In each case, the protagonist chooses the worse action because it is worse.
This choice is made as a costly signal. In particular, to avoid motive ambiguity.
If you choose something better over something worse, you will be suspected of doing so because it was better rather than worse.
If you choose something worse over something better, not only do you show how little you care about making the world better, you show that you care more about people noticing and trusting this lack of caring. It shows your values and loyalties.
In the first example, you care more about your spouse’s view of how much you care about their experience than you care about your own experience.
In the second example, you care more about being seen as focused on your own success than you care about outcomes you won’t be responsible for.
In the third example, you care more about being seen as loyal than about improving the world by being helpful.
In the fourth example, you care about those making decisions over your fate believing that you will focus on the things they believe the next person deciding your fate will care about, so they can turn a profit. They don’t want you distracted by things like product quality.
In the old joke, the oligarchs want to show they have money to burn, and that they care a lot about showing they have lots of money to burn. That they actively want to Get Got to show they don’t care. If someone thought the scarf was bought for mundane utility, that wouldn’t do at all.
One highly effective way to get many people to spend money is to give them a choice to either spend the money, or be slightly socially awkward and admit that they care about not spending the money. Don’t ask what the wine costs, it would ruin the evening.
The warning of Out to Get You is insufficiently cynical. The motive is often not to get your resources, and is instead purely to make your life worse.
Conflict theorists are often insufficiently cynical. We hope the war is about whether to enrich the wealthy or help the people. Often the war is over whether to aim to destroy the wealthy, or aim to hurt the people.
In simulacra terms, these effects are strongest when one desires to be seen as motivated on level three, but these dynamics are potentially present to an important extent for motivations at all levels. Note also that one is not motivated by this dynamic to destroy something unless you might plausibly favor it. If and only if everybody knows you don’t care about poisoning the river, it is safe to not poison it.
This generalizes to time, to pain, to every preference. Hence anything that wants your loyalty will do its best to ask you to sacrifice and destroy everything you hold dear, because you care about it, to demonstrate you care more about other things.
Worst of all, none of this assumes a zero-sum mentality. At all.
Such behavior doesn’t even need one.
If one has a true zero-sum mentality, as many do, or one maps all results onto a zero-sum social dynamic, all of this is overthinking. All becomes simple. Your loss is my gain, so I want to cause you as much loss as possible.
Pain need not be the unit of effort if it is the unit of scoring.
The world would be better if people treated more situations like the first set of problems, and fewer situations like the second set of problems. How to do that?
This post is based on the book Moral Mazes, which is a 1988 book describing “the way bureaucracy shapes moral consciousness” in US corporate managers. The central point is that it’s possible to imagine relationship and organization structures in which unnecessarily destructive behavior, to self or others, is used as a costly signal of loyalty or status.
Zvi titles the post after what he says these behaviors are trying to avoid, motive ambiguity. He doesn’t label the dynamic itself, so I’ll refer to it here as “disambiguating destruction” (DD). Before proceeding, I want to emphasize that DD is referring to truly pointless destruction for the exclusive purpose of signaling a specific motive, and not to an unavoidable tradeoff.
This raises several questions, which the post doesn’t answer.
1. Do pointlessly destructive behaviors typically succeed at reducing or eliminating motive ambiguity?
2. Do they do a better job of reducing motive ambiguity than alternatives?
3. How common is DD in particular types of institutions, such as relationships, cultures, businesses, and governments?
4. How do people manage to avoid feeling pressured into DD?
5. What exactly are the components of DD, so that we can know what to look for when deciding whether to enter into a certain organization or relationship?
6. Are there other explanations for the components of DD, and how would we distinguish between DD and other possible interpretations of the component behaviors?
We might resort to a couple of explanations for (4), the question of how to avoid DD. One is the conjunction of empathy and act utilitarianism. My girlfriend says she wouldn’t want to go to a restaurant only she loves, even if the purpose was to show I love her. Part of her enjoyment is my enjoyment of the experience. If she loved the restaurant only she loves so much that she was desperate to go, then she could go with someone else. She finds the whole idea of destructive disambiguation of love to be distinctly unappealing. The more aware she is of a DD dynamic, the more distasteful she finds it.
Another explanation for (4) is constitutional theory. In a state of nature, people would tend to form communities in which all had agreed not to pressure each other into DD dynamics. So rejecting DD behavior is a way of defending the social contract, which supersedes whatever signaling benefit the DD behavior was supposed to contribute to in particular cases.
As such, for a DD dynamic to exist consistently, it probably needs to be in a low-empathy situation, in which there is little to no ability to enforce a social contract, where the value of motive disambiguation is very high, and where there are destructive acts that can successfully reduce ambiguity. It could also be the result of stupidity: people falsely believing that DD will accomplish what it is described as accomplishing here and bring them some selfish benefit. As such, a description of DD might constitute an infohazard of sorts—though it seems to me to be very far from anything like sharing the genome of smallpox or the nuclear launch codes.
It seems challenging to successfully disambiguate motives with destructive behavior, because destructive acts expose the person enacting DD to perceptions of incompetence. Maybe they poisoned the water supply because they wanted to show loyalty, or maybe they did it because they’re too incompetent to know how to maintain the factory without causing pollution. Maybe they took you to a restaurant they hate because they love you, or maybe it’s because they’re insecure or trying to use it as some sort of a bargaining chip for future negotiations.
All that said, I can imagine scenarios in which a person makes a correct judgment that DD will work as described, brings them the promised benefits, and provides supporting evidence in favor of DD as an effective strategy for acquiring status. This does indeed seem bad. One way to explain how this could be done is the idea of a cover story, a reasonable-sounding explanation for the behavior that all involved know is false, and serves simultaneously as evidence to external parties that the behavior was reasonable and evidence to internal parties that the behavior ought to be interpreted as DD.
But we also need to explain why DD is not only the best way to affirm loyalty, but the best overall way to affirm the things that loyalty is meant to accomplish. For example, loyalty is often meant to contribute to group survival, such as among soldiers. Even if DD is the best way to display a soldier’s loyalty, it could be that it has side effects that diminish the health of the group, such as diminishing the appeal of military service to potential recruits.
Band of Brothers is a dramatic reenactment of the true story of Easy Company, paratroopers in WWII. Their captain, Herbert Sobel, put them through all kinds of hazing rituals. Examples include offering the company a big spaghetti dinner only to surprise them in the middle by forcing them to run up a mountain, causing the soldiers to vomit halfway up; or forcing the soldiers to inflict cruel punishments on each other.
Ultimately, Sobel loses the loyalty of his troops, not due to his strictness, but due to his incompetence in making command decisions in training exercises. They mutiny, and Sobel is replaced. Despite their dislike of Sobel, some soldiers think he did cause the soldiers to become particularly loyal to each other, though there are also many other mechanisms by which the soldiers were both selected for loyalty and had opportunities to demonstrate it. It’s not at all clear that Sobel’s pointlessly harsh treatment was overall beneficial to the military, though his rigor as a trainer does seem to have been appreciated.
This suggests another explanation for DD, which is that the person enacting it may find the capacity to be strict and punitive to be useful in other contexts, and simply lack the discernment to distinguish between appropriate and inappropriate contexts. Or DD might be a form of training that enables the perpetrator to enact strictness in non-destructive contexts. To really work as motive disambiguation, these alternatives also need to be ruled out.
Taken all together, we have lots of reasons to think that DD ought to be rare.
We can create constitutions, explicit or implicit, that bar DD. These constitutions can be on many group levels: a management group, corporation, and whole industry might create multi-layered anti-DD constitutions, on the level of explicit contracts or, more likely, implicit or informal norms.
DD needs to affirm loyalty without creating overall negative side effects for the group or for the person sending the signal.
DD needs to reduce ambiguity on net, and destructive behaviors invite explanations other than loyalty signals.
All of these suggest to me that we should have a low prior that any given act is best explained as an example of DD. First, we’d want to resort to explanations such as tradeoffs, incompetence, negative outcomes from risky decisions, value differences, an attempt to “rescue” a mistake or example of incompetence and reframe it as a signal of loyalty post-hoc, and our own lack of information. These are common causes of destructiveness. Loyalty-based relationships are extremely common, so destructive behavior will often be associated with a loyalty-based relationship, and test those bonds of loyalty. There are so many plausible alternative explanations that we should require some extraordinary evidence that a particular behavior is a central case of destructive disambiguation.
I think a counterargument here is the “cover story” hypothesis I referred to earlier. If we are supposing that DD is common enough to be a serious problem in our society, perhaps we are also assuming that cover stories are effective enough that it will be very hard to find examples that are obvious to outsiders. It’s a little like sex in a stereotypical “Victorian” society: it’s obviously happening (we see the children), but everybody’s taking pains to disguise it, and if you didn’t know that sex existed, you might never figure it out, and it would sound very implausible if explained to you.
Of course, with the sex analogy, even Victorians would eventually figure out that sex existed. Likewise, if DD is happening all the time, then people ought to be able to consult their lived experience to find ready examples. I personally find it to be alien to my experience, and others seem to feel the same way in the comment section here. My girlfriend goes further, feeling that it’s not only alien, but a repugnant concept to even discuss. I can recall conversations with my uncle, who spent his career in the wine industry, and he says his company took a strong stand against doing any business in corrupt countries, even if there were profits to be made. These are just anecdotes, but I think it’s necessary to start by resorting to them in this case.
If DD has been operationalized and subjected to scientific study, I would be interested to read the studies. But I would subject them to scrutiny along the lines I’ve outlined here. It would be a disturbing finding if robust evidence led us to conclude that DD is pervasive, but I suspect that we’d find out that the disturbing features of human behavior have alternative explanations.
Initial reaction: I like this post a lot. It’s short, to the point. It has examples relating its concept to several different areas of life: relationships, business, politics, fashion. It demonstrates a fucky dynamic that in hindsight obviously game-theoretically exists, and gives me an “oh shit” reaction.
Meditating a bit on an itch I had: what this post doesn’t tell me is how common this dynamic is, or how to detect when it’s happening.
While writing this review: hm, is this dynamic meaningfully different from the idea of a costly signal?
Thinking about the examples:
Seems believably common, but also kind of like a fairly normal not-very-fucky costly signal.
(And, your spouse probably wants to come to both restaurants sometimes. And they probably come to the place-only-they-love less than they ideally would, because you don’t love it.)
Gets fuckier if they’d enjoy the place-you-both-love more than the place-only-they-love. Then the cost isn’t just your own enjoyment; it’s a signal you care about something, at the cost of the thing you supposedly care about. Still believable, but feels less common.
(Didn’t work for me, but I think the link is supposed to highlight the following phrase in the second paragraph: “The Immoral Mazes sequence is an exploration of what causes that hell, and how and why it has spread so widely in our society. Its thesis is that this is the result of a vicious cycle arising from competitive pressures among those competing for their own organizational advancement.”)
Ah, so I think this has an important difference from a normal “costly signal”, in that the cost is to other people.
I read the immoral mazes sequence at the time, but don’t remember in depth. I could believe that it reliably attests that this sort of thing happens a lot.
Again, the cost is to other people, not the politician.
How often do politicians get choices like this? I think a more concrete example would be helpful for me here, even if it was fictional. (But non-fiction would be better, and if it’s difficult to find one… that doesn’t mean it doesn’t happen, real-world complications might mean we don’t know about them and/or can’t be sure that’s what’s going on with them. But still, if it’s difficult to find a real-world example of this, that says something bad.)
(Is the point of the link to the pledge of allegiance simply that that’s a signal of loyalty that costs a bit of time? I’m not American and didn’t read the article in depth, I could be missing something.)
The claimed effect here is: the more investors know about your code quality, the more incentive you have to write bad code. I could tell an opposite story, where if they know you have bad code they expect that to hurt your odds of maintaining a hockey-stick graph.
So this example only seems to hold up if:
1. Investors know about your code quality.
2. They care about what your code quality says about loyalty to their interests, more than what it says about your ability to satisfy their interests.
I’m not convinced on either point.
If it does hold up… yeah, it’s fucky in that it’s another case of “signal you care about a thing by damaging the thing”.
Mostly just seems like another fairly straightforward costly signal? Not very fucky.
Same as above. I think this is what economists call a Veblen good.
I don’t think the article convinces me to be significantly more concerned about regular costly signals than I currently am, where the cost is entirely on the person sending the signal. That’s two and a half of the examples, and they seem like the least-fucky. I think… if I’m supposed to care about those, I’d probably want an article that specifically focuses on them, rather than mixing them with fuckier things.
The ones where (some of) the cost is to other people, or to the thing-supposedly-cared-about, are more worrying. But there’s also not enough detail here to convince me those are very common. And the example closest to my area of expertise is the one I don’t believe. I feel like a lot of my worry about these, in my initial reading, came from association with the others, that having separated out I find more-believable and less-fucky. I don’t think it’s deliberate, but I feel vaguely Motte-and-Baileyed.
Writing this has moved me from positive on the review to slightly negative. (Strictly speaking I didn’t cast a vote before, but I was expecting it would have been positive.) But I won’t be shocked if something I’ve missed brings me back to positive.