My own take is that moral mazes should be considered in the “interesting hypothesis” stage, and that the next step is to figure out how to check the hypothesis empirically.
I made some cursory attempts at this last year, and then found myself unsure this was even the right question. The core operationalization I wanted was something like:
Does having more layers of management introduce pathologies into an organization?
How much value is generated by organizations scaling up?
Can you reap the benefits of organizations scaling up by instead having them splinter off?
(The “middle management == disconnected from reality == bad” hypothesis was the most clear-cut part of the moral maze model to me, although I don’t think it was the only part of the model.)
I have some disagreements with Zvi about this.
I chatted briefly with habryka about this and I think he said something like “it seems like a more useful question is to look for positive examples of orgs that work well, rather than try and tease out various negative ways orgs could fail to work.”
I think there are maybe two overarching questions this is all relevant to:
How should the rationality / xrisk / EA community handle scale? Should we be worried about introducing middle-management into ourselves?
What’s up with civilization? Is maziness a major bottleneck on humanity? Should we try to do anything about it? (My default answer here is “there’s not much to be done, simply because the world is full of hard problems and this one doesn’t seem very tractable even if the models are straightforwardly true.” But I do think this is a contender for a Hamming problem for humanity.)