While I do think the rise of bureaucracies is inevitable, it’s important to remember that there is a tradeoff between bureaucracy and innovation.
I’m not sure that the statement
A mature bureaucracy is an almost indestructible social structure.
is false so much as sub-optimal. The easiest place to see this is in businesses. Businesses follow a fairly predictable cycle:
1. A new business is created that has access to some innovative idea or technology that allows it to overcome entrenched rivals.
2. As the business grows, it develops a bureaucracy that lets it more efficiently “mine” its advantage and fend off competitors. This increased efficiency comes at the cost of innovation: the company becomes better at the things it does and worse at the things it doesn’t.
3. Eventually the world changes, and what was once a business advantage becomes a disadvantage. However, the business is now too entrenched in its own bureaucracy to change.
4. New, innovative rivals appear and destroy what remains of the business.
Or, to quote Paul Graham:
Companies never become less bureaucratic, but they do get killed by startups that haven’t become bureaucratic yet, which amounts to the same thing.
I suspect that a similar cycle plays out in the realm of public governance as well, albeit on a much larger time scale. Consider the Chinese concept of the Mandate of Heaven. As governments age, they gradually become less responsive to the needs of the people until they are ultimately overthrown. Indeed, one of the primary advantages of multi-party democracy is that the ruling party can be periodically overthrown without burning down the entire country first.
The basic engine behind this process is rule 6 of the bureaucratic model you describe:
The duties of the position are based on general learnable rules and regulation, which are more or less firm and more or less comprehensive
Because bureaucracies follow a fixed set of rules (and because rules are much more likely to be added than repealed), the action of the bureaucracy becomes more stratified over time. This stratification leads to paralysis because no individual agent is capable of change, even if they know what change is needed and want to implement it. Creating a bureaucracy is creating a giant coordination problem that can only be solved by replacing the bureaucracy.
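To make the add-much-more-than-repeal ratchet concrete, here is a minimal toy simulation. The probabilities and proposal rate are made-up illustrative assumptions, not measurements of any real organization:

```python
import random

def simulate_rule_count(years: int, p_add: float = 0.9, p_repeal: float = 0.1,
                        proposals_per_year: int = 10, seed: int = 0) -> list:
    """Toy model: rule changes are proposed each year, but additions succeed
    far more often than repeals, so the rulebook ratchets upward."""
    rng = random.Random(seed)
    rules, history = 0, []
    for _ in range(years):
        for _ in range(proposals_per_year):
            if rng.random() < 0.5:             # proposal is an addition
                rules += rng.random() < p_add  # bool coerces to 0 or 1
            elif rules > 0:                    # proposal is a repeal
                rules -= rng.random() < p_repeal
        history.append(rules)
    return history

print(simulate_rule_count(50)[9::10])  # rule count at the end of each decade
```

Even though repeals are possible in this sketch, the asymmetry alone is enough to make the rulebook grow roughly linearly without bound.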
What does any of this mean for AI?
Will we use bureaucracies to govern AI? Yes, of course we will. I am doing some work with GPT-3, and OpenAI has already developed a set of rules governing its use and a series of procedures for determining whether those rules are being followed.
Can we imagine a single “perfect bureaucracy” that will govern all of AI on behalf of humans? No. Just as businesses and governments must periodically die to make room for innovation, so must the bureaucracies that govern AI. Indeed, one sub-optimal singularity would be a single bureaucracy of AIs becoming so powerful that it could never be overthrown. This would hopefully leave humans much better off than they are today, but permanently locked in at whatever level of development the bureaucracy had reached prior to ossification.
Is there some post-bureaucracy governance model that can give us the predictability and controllability of bureaucracy without the tradeoff of lost innovation? If you consider a marketplace with capitalistic competition a “structure”, then sure. If AI is somehow able to solve the coordination problem that leads to the ossification of bureaucracy (perhaps that problem is an artifact of the limits on humans’ cognitive abilities), then maybe? I feel like the tradeoff between rigid, predictable rules and innovation is more fundamental than just the coordination problem, but I could be wrong.
Thank you for the comment. There are several interesting points I want to respond to. Here are my thoughts, in no particular order of importance:
I think your insight on rigidity versus flexibility (rigid, predictable rules vs. innovation) is helpful more generally, and it is something my post does not address well. My own sense is that an ideal bureaucracy structure could be rationally constructed to balance the tradeoffs between rigidity and innovation. Here I would take Weber’s rule 6, which you highlight, as an example. As represented in the post it states: “The duties of the position are based on general learnable rules and regulation, which are more or less firm and more or less comprehensive.” I take this to mean that rules and regulations need to be “learnable,” not static. A machine beamte (a generally intelligent AI) should be able to quickly update on new rules and regulations. The condition of being “more or less firm and more or less comprehensive” seems more akin to a coherence condition than to a requirement that the rules never change.
This builds toward what I see as your concern that an ideal bureaucracy structure consists of fixed rules, ossifies, and is generally unable to adapt to changes in the type and character of the complexity of the environment in which it is embedded. My sense is that these are not fundamental features of a rationally applied bureaucratic structure, but rather of the limited information-processing and communication capabilities of the agents that hold positions within it. My sense is that AIs could overcome these challenges given some flexibility in structure, perhaps through a weighted voting mechanism among the AIs.
One note here is that, for me, an ideal bureaucracy structure doesn’t need to perfectly replicate Weber’s description. Instead it would take into account what I see as the underlying fact that complexity demands specialization and coordination, which in turn implies hierarchy. An ideal bureaucracy structure would be one that requires multiple agents to specialize and coordinate to solve problems of any arbitrary level of complexity, which requires specifying both horizontal and vertical coordination. Weber’s conceptualization as described in the post deserves more attention for the alignment problem, I think, given that bureaucracies’ limitations can mostly be understood in terms of human limitations in information processing and communication.
I think I share your concern about a single bureaucracy of AIs being suboptimal, unless the path to superintelligence runs through iterated amplification of narrower AIs that eventually leads to a jointly emergent superintelligence constrained in an underlying way by the bureaucratic structure, training, and task specialization. In that case (I think) the emergence of a superintelligent AI that in practice functions like a bureaucracy would not necessarily be suboptimal. It’s not clear to me why, if the bureaucratic norms and training could be updated to impose better rules and regulations, the bureaucracy would need to be overthrown.
I would suggest that market competition and bureaucratic structure lie along a continuum of structures for effectively and efficiently processing information. One takes a more decentralized approach, relying largely on prices to convey relevant value and information; the other takes a more centralized approach, implied by loosely organized hierarchical structures that allow for reliable specialization. It seems to me that market mechanisms also have their own tradeoffs between innovation and controllability. In other words, I do not see that the market structure dominates the bureaucratic or centralized approach across these tradeoffs in particular.
There are other governance models that I think are helpful for this discussion as well; Weber’s is one of the oldest in the club. One is Herbert Simon’s Administrative Behavior (generalized to other contexts in his The Sciences of the Artificial). Another is Elinor Ostrom’s Institutional Analysis and Development framework. My hope is to build out posts in the near future that take these adjustments in structure into consideration and discuss the tradeoffs.
Thanks again for the comment. I hope my responses have been helpful. Additional feedback and discussion are certainly welcomed!
My sense is that these are not fundamental features of a rationally applied bureaucratic structure, but rather of the limited information-processing and communication capabilities of the agents that hold positions within it. My sense is that AIs could overcome these challenges given some flexibility in structure, perhaps through a weighted voting mechanism among the AIs.
I think this is the essential question that needs to be answered: Is the stratification of bureaucracies a result of the fixed limit on human cognitive capacity, or is it an inherent limitation of bureaucracy?
One way to approach this question is to look at the asymptotics of the situation. Suppose that the number of “rules” governing an organization is proportional to the size of the organization. The question is then whether the complexity of the coordination problem also increases only linearly. If so, it is reasonable to suppose that humans (with a fixed cognitive capacity) would face a coordination problem but an AI would not.
Suppose instead that the complexity of the coordination problem increases with the square of organization size. In that case, as an organization grows, an AI would find coordination harder and harder, but still tractable.
Finally, what if the AI must consider all possible interactions between all possible rules in order to resolve the coordination problem? In this case, the complexity of “fixing” a stratified bureaucracy is exponential in the size of the bureaucracy, and beyond a certain (slowly rising) threshold the coordination problem is intractable.
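A quick back-of-the-envelope makes the three regimes concrete. The budget below is an arbitrary stand-in for an agent's total coordination capacity, chosen purely for illustration:

```python
import math

BUDGET = 1e12  # arbitrary stand-in for an agent's coordination capacity

# Largest organization size n whose coordination problem still fits the
# budget, under each growth regime discussed above.
linear      = int(BUDGET)              # cost ~ n    ->  n <= budget
quadratic   = math.isqrt(int(BUDGET))  # cost ~ n^2  ->  n <= sqrt(budget)
exponential = int(math.log2(BUDGET))   # cost ~ 2^n  ->  n <= log2(budget)

print(f"linear:      n <= {linear:,}")
print(f"quadratic:   n <= {quadratic:,}")
print(f"exponential: n <= {exponential}")
```

Raising the budget a millionfold buys a millionfold larger organization in the linear regime, a thousandfold larger one in the quadratic regime, and only about twenty more rules in the exponential one.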
My sense is that AIs could overcome these challenges given some flexibility in structure, perhaps through a weighted voting mechanism among the AIs.
If weighted voting is indeed a solution to the problem of bureaucratic stratification, we would expect this to be true of both human and AI organizations. In that case, great effort should be put into discovering such structures, because they would be of use in the present and not only in our AI-dominated future.
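For concreteness, here is a minimal sketch of what such a mechanism might look like. The agent names, weights, and approval threshold are all hypothetical choices for illustration, not a proposal from either of our posts:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    weight: float  # e.g. stake, seniority, or demonstrated forecasting skill

def rule_change_passes(votes: dict, agents: list, threshold: float = 0.6) -> bool:
    """A proposed rule change passes if agents holding at least `threshold`
    of the total voting weight approve it."""
    total = sum(a.weight for a in agents)
    in_favor = sum(a.weight for a in agents if votes.get(a.name, False))
    return in_favor / total >= threshold

agents = [Agent("auditor", 3.0), Agent("operator", 2.0), Agent("planner", 1.0)]
print(rule_change_passes({"auditor": True, "planner": True}, agents))  # 4/6 -> True
```

The interesting design questions are all hidden in how the weights are assigned and updated; a fixed weighting would presumably ossify just like a fixed rulebook.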
It’s not clear to me why, if the bureaucratic norms and training could be updated to impose better rules and regulations, the bureaucracy would need to be overthrown.
Suppose the coordination problem is indeed intractable; that is, once a bureaucracy has become sufficiently complex, it is impossible to reduce its complexity without unpredictable and undesirable side effects. In this case, the optimal solution may be the one chosen by capitalism (and revolutionaries): periodically replace the bureaucracy once it is no longer near the efficiency frontier.
I would suggest that market competition and bureaucratic structure lie along a continuum of structures for effectively and efficiently processing information.
There is undoubtedly a continuum of solutions between “survival of the fittest” capitalistic competition and “rule-abiding” bureaucratic management. The discovery of new “points” on this continuum (for example, bureaucracy with capitalist characteristics) deserves in-depth study.
To take one example, the Bezos Mandate aims to structure communication between teams at Amazon more like a marketplace and less like a bureaucracy. Google’s 20% time is another example of purposely reducing management overhead in order to foster innovation.
It would be awesome if one could “fine-tune” the level of competitiveness and thereby choose any point on this continuum. If this were possible, one might even be able to use control theory to dynamically adjust the tradeoff over time in order to maximize utility.
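As a toy sketch of that last idea: a bare-bones proportional controller nudging a hypothetical scalar “competitiveness” dial toward a target innovation level. The response curve fake_innovation is invented for illustration; a real organization's response would be noisy, delayed, and nonlinear in ways this ignores:

```python
def tune_competitiveness(measure_innovation, target: float, steps: int = 100,
                         gain: float = 0.05, c: float = 0.5) -> float:
    """Proportional controller: nudge the competitiveness dial `c` toward
    whatever level makes measured innovation hit the target."""
    for _ in range(steps):
        error = target - measure_innovation(c)
        c = min(1.0, max(0.0, c + gain * error))  # keep the dial in [0, 1]
    return c

# Hypothetical stand-in: innovation rises with competitiveness but saturates.
fake_innovation = lambda c: c / (c + 0.3)

print(f"dial settles near {tune_competitiveness(fake_innovation, target=0.6):.2f}")
```

The point is not the controller itself but that treating the bureaucracy/market mix as a continuous control variable, rather than a one-time constitutional choice, opens the door to dynamically trading off predictability against innovation.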