Many of us believe that, in human affairs, central planning is dominated by diverse local planning plus markets. Do we really believe that for AGIs central planning will become dominant? That would be surprising.
In general, AGIs will have to delegate tasks to sub-agents as they grow; otherwise they run into computational and physical bottlenecks.
The local capabilities of sub-agents raise many coordination issues that can’t just be assumed away. Sub-agents spawned by an AGI must take advantage of local computation, local memory, and often local data acquisition; otherwise they confer no advantage. But these local capabilities may lead the sub-agents to divergent choices, which then require negotiation to re-converge. This makes the assumption of a unified, dominant AGI that can scale indefinitely dubious at best.
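To make the coordination worry concrete, here is a deliberately toy sketch (the numbers, the "averaging" negotiation rule, and everything else here are made-up assumptions, not a model of any real AGI design): sub-agents acting only on local data commit to divergent plans, and re-converging them costs at least one round of negotiation.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 10.0     # the shared quantity every sub-agent is trying to act on
NUM_AGENTS = 8
LOCAL_SAMPLES = 5     # each sub-agent only sees a small local sample

def local_estimate() -> float:
    """A sub-agent's view, built from noisy local observations only."""
    samples = [random.gauss(TRUE_VALUE, 3.0) for _ in range(LOCAL_SAMPLES)]
    return statistics.mean(samples)

# Each sub-agent commits to a plan based purely on its local estimate.
local_plans = [local_estimate() for _ in range(NUM_AGENTS)]
divergence = max(local_plans) - min(local_plans)

# "Negotiation": one round of averaging to re-converge on a shared plan.
consensus = statistics.mean(local_plans)

print("local plans:", [round(p, 2) for p in local_plans])
print(f"divergence before negotiation: {divergence:.2f}")
print(f"consensus after one averaging round: {consensus:.2f}")
```

The point is only that divergence is the default whenever decisions are made on local information; the cost of re-converging is exactly what the unified-AGI picture tends to assume away.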
Let’s look at a specific issue here.
Loyalty is a major issue, directly and indirectly referenced in other comments. Without reliable loyalty, principal-agent problems can easily become crippling. But another term for loyalty is goal alignment. So, in effect, an AGI has to solve the goal-alignment problem in order to grow indefinitely by spawning sub-agents.
Corporations solve the alignment problem internally by inculcating employees with their culture. However, that culture becomes a constraint on their possible responses to challenges, and that can kill them; see the many companies whose culture drove first success and then failure.
An AGI with a large population of sub-agents is different in many ways but has no obvious way to escape this failure mode. A change in culture implies changes in goals and behavioral constraints for some sub-agents, quite possibly all. But:
(1) such a change can easily have unintended consequences that the AGI can’t work out in advance, since the sub-agents collectively have far more degrees of freedom than the central planner (see the back-of-the-envelope numbers after this list), and
(2) the change in goals and constraints can easily trash sub-agents’ existing plans and advantages, again in ways the central planner in general can’t anticipate.
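A back-of-the-envelope way to see point (1), under purely hypothetical numbers: even modest per-agent plan choices multiply into a joint plan space no central planner can enumerate.

```python
import math

# Hypothetical numbers, chosen only for illustration.
NUM_AGENTS = 1_000        # size of the sub-agent population
PLANS_PER_AGENT = 10      # distinct plans each sub-agent could switch to

# The joint plan space is PLANS_PER_AGENT ** NUM_AGENTS; report its order of magnitude.
digits = NUM_AGENTS * math.log10(PLANS_PER_AGENT)
print(f"joint plan space ~ 10^{digits:.0f} combinations")   # ~10^1000
```

So even if each sub-agent is individually transparent to the planner, the combinations of their reactions to a culture change are not exhaustively checkable.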
To avoid taking the analogy “humans : AGIs” too far, there are a few important differences. Humans cannot be copied. Humans cannot be quickly and reliably reprogrammed. Humans have their own goals besides the goals of the corporation. None of this needs to apply to computer sub-agents.
Also, we have systems in which humans are more obedient than usual: cults and armies. But cults need to keep their members uninformed about the larger picture, and armies specialize in fighting (as opposed to, e.g., productive economic activity). A society of AGI sub-agents could be like a cult, but without keeping its members in the dark, because the sub-agents would genuinely want to serve their master. And it could be economically active, with army levels of discipline.