“But that’s your job”: why organisations can work
It’s no secret that corporations, bureaucracies, and governments don’t operate at peak efficiency for ideal goals. Economics has literature on its own version of the problem; on this very site, Zvi has been presenting a terrifying tale of Immoral Mazes, or how politics can eat all the productivity of an organisation. Eliezer has explored similar themes in Inadequate Equilibria.
But reading these various works, especially Zvi’s, has left me with a puzzle: why do most organisations kinda work? Yes, far from maximal efficiency and with many political and mismeasurement issues. But still:
The police spend some of their time pursuing criminals, and enjoy some measure of success.
Mail is generally delivered to the right people, mostly on time, by both governments and private firms.
Infrastructure gets built, repaired, and often maintained.
Garbage is regularly removed, and public places are often cleaned.
So some organisations do something along the lines of what they were supposed to, instead of, I don’t know, spending their time doing interpretive dance seminars. You might say I’ve selected examples where the outcome is clearly measurable; yet even in situations where measuring is difficult, or there is no pressure to measure, we see:
Central banks that set monetary policy a bit too loose or a bit too tight—as opposed to pegging it to random numbers from the expansion of pi.
Education or health systems that might have low or zero marginal impact, but that seem to have a high overall impact—as in, the country is better off with them than completely without them.
A lot of academic research actually uncovers new knowledge.
Many charities spend some of their efforts doing some of the good they claim to do.
For that last example, inefficient charities are particularly fascinating. Efficient charities are easy to understand; so are outright scams. But ones in the middle—how do they happen? To pick one example almost at random, consider Heifer Project International, which claims to give livestock to people in the developing world. In no way is this an efficient use of money, but it seems that Heifer is indeed gifting some animals[1] and also giving some less publicised agricultural advice (that may be more useful than the animals). So, Heifer is inefficient, poorly assessed, different from the image it presents to donors, and possibly counter-productive overall—so why do they seem to spend a significant portion of their money actually doing what they claim to be doing (or maybe even doing better actions than what they claim)?
Reading the Mazes sequence, I’d expect most organisations to become mazes, and most mazes to be utterly useless—so how do we explain these inefficient-but-clearly-more-efficient-than-they-should-be cases?
I’ll suggest one possible explanation here: that there is a surprising power in officially assigning a job to someone.
“You are the special people’s commissar for delivering mail”
Let’s imagine that you are put in charge of delivering mail in the tri-state area. It doesn’t matter if you’re in a corporation, a government, a charity, or whatever: that’s your official job.
Let us warm the hearts of Zvi and Robin Hanson and assume that everyone is cynical and self-interested. You don’t care about delivering mail in the tri-state area. Your superiors and subordinates don’t care. Your colleagues and rivals don’t care. The people having the mail delivered do care, but they aren’t important, and don’t matter to anyone important. Also, everyone is a hypocrite.
Why might you nevertheless try to have the mail delivered, and maybe even design a not-completely-vacuous measurement criterion to assess your own performance?
Hypocrisy and lip service, to the rescue!
What will happen if you make no effort to deliver the mail? Well, the locals may be unhappy; there may be a few riots, easily put down. Nevertheless, you will get a reputation for incompetence, and for causing problems. Incompetence might make you useful in some situations, but it makes you a poor ally (and remember the halo effect: people will assume you’re generally incompetent). And a reputation for causing problems is definitely not an asset.
Most important of all, you have made yourself vulnerable. If a rival or superior wants to push you out, they have a ready-made excuse: you messed up your job. It doesn’t matter that they don’t care about that, that you don’t care about that, and that everyone knows that. It’s a lot easier to push someone out for a reason—a reason everyone will hypocritically pay lip service to—than for no reason at all. If you seem competent, your allies can mutter (or shout)[2] about unfair demotion or dubious power-plays; if you seem incompetent, they have to first get over that rhetorical barrier before they can start defending you—which they may thus not even try to do. And they’ll always be thinking “we wouldn’t have this problem, if only you’d just done your job”.
You’re also the scapegoat if the riots are more severe than expected, or if the locals have unexpected allies. And many superiors like their subordinates to follow orders, even—especially?—when they don’t care about the orders themselves.
Ok, but why don’t you just lie and wirehead the measurement variable? Have a number on your spreadsheet labelled “number of letters delivered”, don’t deliver any mail, but just update that number. When people riot, say they’re lying, anti-whatever saboteurs.
Ah, but you’re still making yourself vulnerable, now to your subordinates as well. If anyone uncovers your fraud, or passes the information on to the right ear, then they have an even better excuse to get rid of you (or to blackmail you, or make other uses of you). Defending a clear fraud is even harder than defending incompetence. Completely gaming the system is dangerous for your career prospects.
That doesn’t mean that you’re going to produce some idealised, perfect measure of mail-receiver satisfaction. You want some measurement that is not obviously a fraud, and that would take some effort to show is unreliable (bonus points if the unreliable parts are “clearly not your fault”, or “the standard way of doing things in that area”). Then, armed with this proxy, you’ll attempt to get at least a decent “mail delivering score”, decent enough to not make you vulnerable.
Of course, your carefully constructed “mail delivering score” is not equal to actual mail-receiver satisfaction. But nor can you get away with making it completely unrelated. Your own success is loosely correlated with your actual job.
And so the standard human behaviour emerges: you’ll cheat, but not too much, and not too obviously. Or, put the other way, you’ll do your official job, to a medium extent, just because that is your official job.
It seems that if you add evolution to the mix, over time the worst abuses will be weeded out: they no longer work as a lip-service performance, and instead make you vulnerable.
It seems we could derive a weak law of progress: over time, it’s not that the best systems will win, but that the worst will die out—raising the sanity waterline from below.
The difference between “best systems win” and “worst systems lose” is only one of timeframe. The two differ in filter effectiveness per iteration on the way to equilibrium.
Why won’t the best systems win?
Maybe it’s more that the best systems need not win. They might, but that’s not guaranteed.
I think ‘best systems’ refers to those that would be best for their object-level purpose, e.g. delivering mail as efficiently as possible. (But too much efficiency would be literally terrible for the people who work there, and at least a small number want to cheat anyway.)
You could also consider a more Darwinian interpretation of which are the ‘best systems’: the ones that receive the most resources while providing the minimum products or services demanded, i.e. that produce the largest ‘internal profit’ while still surviving indefinitely. (And for cheaters, these systems are paradises.)
But I think the key negative feedback explaining why immoral mazes mostly still work is more likely that other people really do care, to some degree, that their job gets done. Apparently, many DMV offices in the U.S. are much better than they were in the past. And the systems themselves can screw up enough to ‘get themselves killed’, e.g. closed, disbanded, or broken up; or individuals in the system can be directly punished, e.g. fined, imprisoned, executed. There’s a significant amount of outside pressure that can be brought to bear.
That’s probably also why it’s the insides of large hierarchies that become the densest immoral mazes. The leaders ‘on the surface’ are (relatively) public figures and thus default targets for punishment. Also, other immoral mazes probably depend on their work being done, at least for their own sakes! And the workers are in direct contact with whatever portion of object-level reality is relevant to doing their jobs. At that level, there very much is a pronounced ‘but that’s (not) my job’ operating.
Your point is that it all boils down to accountability, then. Not because of justice, but because failing on some aspects of your job for which you are held accountable by people on the outside (like not delivering the mail for the mail company, or polluting for the eco-friendly company) makes you vulnerable, and thus is really dangerous for your self-interest.
The fully cynical worldview is a bit too much for me, but I feel this explains a lot within this view.
Social pressure to conform (in doing a recognizable job to get respect from people around you) is a great explainer for mazes. They tend to be cases where it’s hard to tell if you’re doing the job.
This might also raise the question: are the identified maze problems a matter of comparing to some ideal state, or to some actually realizable state that is more reality than dream?
Kakonomics
How do you find this concept relevant to the article?
Background for people not familiar with the term:
https://www.edge.org/response-detail/10993
Kakonomics describes cases where people not only have standard preferences to receive a High-quality good and deliver a Low-quality one (the standard sucker’s payoff) but they actually prefer to deliver a Low-quality good and receive a Low-quality one, that is, they connive on a Low-Low exchange.
It provides additional structure to flesh out the ‘just because’ clause of the final sentence. It might not give a full accounting of that clause, but it seems like a very significant piece.