I think there’s a big gap here between what Zvi discusses and what you’re looking at. Mazes don’t usually arise in smaller, more legible startups; they arise in large bureaucracies. And as I’ve argued in the past, the structural changes that startups undergo as they turn into large corporations are significant and inevitable. You’re talking about moral mazes not existing while you’re scaling, but the key changes happen once you have scaled up.
As I said in that post, legibility and flexibility are exactly what go away with scale, and once an organization gets big enough, middle management shifts from “these are the compromises needed to align people with business goals” to “this is how the system works,” regardless of purpose. And that’s when the mazes start taking over.
From reading your article, it seems like one of the major differences between your understanding of ‘Mazes’ and Zvi’s is that you’re much more inclined to describe the loss of legibility and flexibility as a necessary feature of big organizations that have to solve complex problems, rather than as something that can be turned up or down quite a bit with the right ‘culture’, without losing size and complexity.
Holden Karnofsky argued for something similar, i.e. that there’s a very deep and necessary link between ‘bureaucratic stagnation’/‘mazes’ and taking the interests of lots of people into account: https://www.cold-takes.com/empowerment-and-stakeholder-management/
I think I disagree less than you’re assuming. Yes, a large degree of the problem is inevitable due to the nature of people and organizational dynamics, but of course it can still differ between organizations. I do think culture is only ever a partial solution, though, because of the nature of scaling and communication in organizations. And re: #2, I had a long-delayed, incomplete draft post on “stakeholder paralysis” that was making many of the points Holden did, until I saw he did it much better and got to abandon it.
I think that sounds right. Even in a totally humane system there’s going to be more indirection as you scale, and that leads to more opportunities for error, principal-agent problems, etc.
Yes, as I pointed out in the linked piece. And the relationship with principal-agent problems is why the post I linked above led me to thinking about Goodhart’s law.