Thank you for the comment. There are several interesting points I want to address. Here are my thoughts, in no particular order of importance:
I think your insight on rigidity versus flexibility (rigid, predictable rules vs. innovation) is helpful more generally, and it is something my post does not address well. My own sense is that an ideal bureaucratic structure could be rationally constructed to balance the tradeoffs between rigidity and innovation. Here I would also take Weber’s rule 6, which you highlight, as an example. As represented in the post, it states: “The duties of the position are based on general learnable rules and regulation, which are more or less firm and more or less comprehensive.” I take this to mean that rules and regulations need to be “learnable,” not static: a machine beamte (a generally intelligent AI) should be able to quickly update on new rules and regulations. The condition of “more or less firm and more or less comprehensive” reads more like a coherence condition than a requirement that the rules never change.
This builds toward what I see as your concern that an ideal bureaucratic structure would consist of fixed rules, ossify, and be generally unable to adapt to changes in the type and character of the complexity of the environment in which the bureaucracy is embedded. My sense is that these are not fundamental features of a rationally applied bureaucratic structure, but rather consequences of the limited information-processing and communication capabilities of the agents that hold positions within it. I suspect AIs could overcome these challenges given some flexibility in structure, for example through a weighted voting mechanism among the AIs.
One note here: for me, an ideal bureaucratic structure doesn’t need to perfectly replicate Weber’s description. Instead, it would appropriately take into account what I see as the underlying fact that complexity demands specialization and coordination, which in turn implies hierarchy. An ideal bureaucratic structure would be one that requires multiple agents to specialize and coordinate to solve problems of arbitrary complexity, which means specifying both horizontal and vertical coordination. Weber’s conceptualization, as described in the post, deserves more attention for the alignment problem, I think, given that bureaucracies’ limitations can mostly be understood in terms of human limitations in information processing and communication.
I think I share your concern that a single bureaucracy of AIs would be suboptimal, unless the path to superintelligence is through iterated amplification of narrower AIs that eventually leads to a jointly emergent superintelligence constrained in an underlying way by the bureaucratic structure, training, and task specialization. This is a case where (I think) the emergence of a superintelligent AI that in reality functions like a bureaucracy would not necessarily be suboptimal. If its bureaucratic norms and training could be updated so that better rules and regulations are imposed upon it, it is not clear to me why it would need to be overthrown.
I would suggest that market competition and bureaucratic structure lie along a continuum of structures for effectively and efficiently processing information. One takes a more decentralized approach, relying largely on prices to convey relevant value and information; the other takes a more centralized approach, implied by loosely organized hierarchical structures that allow for reliable specialization. It seems to me that market mechanisms also have their own tradeoffs between innovation and controllability. In other words, I do not see that the market structure dominates the bureaucratic or centralized approach across these particular tradeoffs.
There are other governance models that I think are helpful for this discussion as well; Weber’s is one of the oldest in the club. One is Herbert Simon’s Administrative Behavior (which he generalizes to other types of contexts in his “The Sciences of the Artificial”). Another is Elinor Ostrom’s Institutional Analysis and Development framework. My hope is to build out posts in the near future that take these adjustments in structure into consideration and discuss the tradeoffs.
Thanks again for the comment. I hope my responses have been helpful. Additional feedback and discussion are certainly welcomed!
My sense is that these are not fundamental features of a rationally applied bureaucratic structure, but rather consequences of the limited information-processing and communication capabilities of the agents that hold positions within it. I suspect AIs could overcome these challenges given some flexibility in structure, for example through a weighted voting mechanism among the AIs.
I think this is the essential question that needs to be answered: Is the stratification of bureaucracies a result of the fixed limit on human cognitive capacity, or is it an inherent limitation of bureaucracy?
One way to answer such a question might be to look at the asymptotics of the situation. Suppose that the number of “rules” governing an organization is proportional to the size of the organization. The question is then: does the complexity of the coordination problem also increase only linearly? If so, it is reasonable to suppose that humans (with a finite cognitive capacity) would face a coordination problem but an AI would not.
Suppose instead that the complexity of the coordination problem increases with the square of the organization’s size. In this case, as an organization grows, an AI would find coordination harder and harder, but still tractable.
Finally, what if the AI must consider all possible interactions between all possible rules in order to resolve the coordination problem? In this case, the complexity of “fixing” a stratified bureaucracy is exponential in the size of the bureaucracy, and beyond a certain (slowly rising) threshold the coordination problem is intractable.
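As a minimal illustration of these three regimes (the compute budget and cost functions below are made-up stand-ins, chosen only so the growth rates can be compared), one can compute the largest organization size whose coordination problem still fits within a fixed budget:

```python
# Toy comparison of coordination cost under three scaling assumptions.
# All constants are illustrative; only the growth rates matter.

BUDGET = 1e12  # hypothetical number of rule interactions an agent can evaluate

def linear(n):      return n       # cost grows with the number of rules
def quadratic(n):   return n ** 2  # cost grows with pairwise rule interactions
def exponential(n): return 2 ** n  # cost grows with all subsets of rules

def largest_tractable(cost, budget=BUDGET, n_max=10**7):
    """Largest organization size whose coordination cost stays within budget."""
    n = 1
    while n < n_max and cost(n) <= budget:
        n += 1
    return n - 1

for cost in (linear, quadratic, exponential):
    print(f"{cost.__name__:>11}: n <= {largest_tractable(cost):,}")

# linear:      n <= 9,999,999 (search capped) -- the budget is never binding
# quadratic:   n <= 1,000,000                 -- harder with scale, but tractable
# exponential: n <= 39                        -- intractable beyond a few dozen rules
```

The point of the toy numbers is only that, in the exponential regime, even a million-fold increase in the budget moves the threshold by a mere twenty rules.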
I suspect AIs could overcome these challenges given some flexibility in structure, for example through a weighted voting mechanism among the AIs.
If weighted voting is indeed a solution to the problem of bureaucratic stratification, we would expect this to be true of both human and AI organizations. In this case, great effort should be put into discovering such structures because they would be of use in the present and not only in our AI dominated future.
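To make the proposal concrete, here is a minimal sketch of one such weighted voting structure (the agent names, weights, and threshold are all hypothetical; how to assign the weights well is precisely the part that would need discovering):

```python
# Toy weighted vote among agents on whether to adopt a proposed rule change.
# The weights stand in for something like each agent's demonstrated competence;
# choosing them well is exactly the open design question.

def weighted_vote(votes, weights, threshold=0.5):
    """votes: agent -> bool, weights: agent -> non-negative float.
    Returns True when the weighted share in favor exceeds the threshold."""
    total = sum(weights.values())
    in_favor = sum(w for agent, w in weights.items() if votes[agent])
    return in_favor / total > threshold

votes   = {"auditor": True, "planner": False, "executor": True}
weights = {"auditor": 0.5, "planner": 0.3, "executor": 0.2}
print(weighted_vote(votes, weights))  # True -- 0.7 of the weight favors the change
```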
If its bureaucratic norms and training could be updated so that better rules and regulations are imposed upon it, it is not clear to me why it would need to be overthrown.
Suppose the coordination problem is indeed intractable; that is to say, once a bureaucracy has become sufficiently complex, it is impossible to reduce the complexity of the system without unpredictable and undesirable side effects. In this case, the optimal solution may be the one chosen by capitalism (and by revolutionaries): periodically replace the bureaucracy once it is no longer near the efficiency frontier.
I would suggest that market competition and bureaucratic structure lie along a continuum of structures for effectively and efficiently processing information.
There is undoubtedly a continuum of solutions between “survival of the fittest” capitalistic competition and “rule-abiding” bureaucratic management. The discovery of new “points” on this continuum (for example, bureaucracy with capitalist characteristics) is something that deserves in-depth study.
To take one example, the Bezos Mandate aims to structure communication between teams at Amazon more like a marketplace and less like a bureaucracy. Google’s 20% time is another example of purposely reducing management overhead in order to foster innovation.
It would be awesome if one could “fine-tune” the level of competitiveness and thereby choose any point on this continuum. If this were possible, one might even be able to use control theory to dynamically adjust the trade-off over time in order to maximize utility.
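As a toy sketch of that last idea (the utility curve, its optimum, and the controller gain below are all invented for illustration, not a model of any real organization), a simple hill-climbing controller could adjust a competitiveness parameter toward whatever level maximizes utility:

```python
# Toy controller that adjusts a "competitiveness" parameter c in [0, 1],
# where c = 0 is pure bureaucracy and c = 1 is pure market competition.
# The utility curve is invented: it simply has an interior optimum at c = 0.6.

def utility(c):
    return -(c - 0.6) ** 2  # too little competition stifles innovation,
                            # too much sacrifices controllability

def step(c, gain=0.05, eps=1e-4):
    """Nudge c uphill using a finite-difference estimate of the utility gradient."""
    grad = (utility(c + eps) - utility(c - eps)) / (2 * eps)
    return min(1.0, max(0.0, c + gain * grad))

c = 0.1  # start near the bureaucratic end
for _ in range(100):
    c = step(c)
print(round(c, 3))  # ~0.6: settles at the (assumed) optimal trade-off
```

A real version would, of course, have to estimate utility from noisy, delayed observations of the organization rather than from a known curve, which is where the hard control-theoretic work would lie.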