Good guess, but that’s not cruxy for me. Yes, LDT/FDT-style things are one possibility. But even if those fail, I still expect non-hierarchical coordination mechanisms among highly capable agents.
Gesturing more at where the intuition comes from: compare hierarchical management to markets, as a control mechanism. Markets require clean factorization—a production problem needs to be factored into production of standardized, verifiable intermediate goods in order for markets to handle the production pipeline well. If that can be done, then markets scale very well: they pass exactly the information and incentives people need, in the form of prices. Hierarchies, in contrast, scale very poorly. They provide basically zero built-in mechanisms for passing the right information between agents, or for providing precise incentives to each agent. They're the sort of thing which can work OK at small scale, where the person at the top can track everything going on everywhere, but they quickly become extremely bottlenecked on the top person as you scale up. And you can see this pretty clearly at real-world companies: past a very small size, companies are usually extremely bottlenecked on the attention of top executives, because lower-level people lack the incentives/information to coordinate on their own across different parts of the company.
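To make the scaling contrast concrete, here's a toy sketch (my own construction, purely illustrative): a single-good market where every participant decides using only the price, and a simple Walrasian tatonnement adjustment finds the clearing price. The point is informational—no agent, and no central node, ever needs to aggregate everyone's private valuations.

```python
import random

def excess_demand(price, buyers, sellers):
    # Each participant decides from the price alone -- nobody needs to
    # know anyone else's valuation. This is the sense in which prices
    # carry exactly the information agents need.
    demand = sum(1 for v in buyers if v > price)   # buyers who'd buy at this price
    supply = sum(1 for c in sellers if c < price)  # sellers who'd sell at this price
    return demand - supply

def tatonnement(buyers, sellers, price=0.1, lr=0.001, steps=2000):
    # Walrasian price adjustment: raise the price under excess demand,
    # lower it under excess supply, until the market roughly clears.
    for _ in range(steps):
        price += lr * excess_demand(price, buyers, sellers)
    return price

random.seed(0)
buyers = [random.random() for _ in range(100)]   # private buyer valuations
sellers = [random.random() for _ in range(100)]  # private seller costs
p = tatonnement(buyers, sellers)
```

With both valuations and costs drawn uniformly, the clearing price lands near 0.5, and each agent's "message" to the system is just its own buy/sell decision at the posted price. A hierarchy solving the same allocation problem would need the top node to ingest all 200 private valuations.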
(Now, you might think that an AI in charge of e.g. a company could make the big hierarchy work efficiently just by being capable enough to track everything itself. But at that point, I wouldn't expect to see a hierarchy at all; the AI can just do everything itself rather than spinning up multiple agents in the first place. Unlike humans, AIs will not be limited by their number of hands. If there is to be some arrangement involving multiple coordinating agents in the first place, then it shouldn't be possible for one mind to just do everything itself.)
On the other hand, while dominance relations scale very poorly as a coordination mechanism, they are algorithmically relatively simple. Thus my claim from the post that dominance seems like a hack for low-capability agents, and higher-capability agents will mostly rely on some other coordination mechanism.
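For contrast, here's how algorithmically cheap a dominance relation is as a conflict-resolution rule (again a toy sketch of my own, not from the post): any dispute reduces to a rank comparison, with no preference information exchanged at all—which is exactly what makes it both simple and lossy.

```python
def resolve_by_dominance(claimants):
    # A pure dominance hierarchy settles any dispute by rank comparison:
    # no prices, no negotiation, no information about what anyone
    # actually wants or how much they value the contested resource.
    return max(claimants, key=lambda c: c["rank"])

# Hypothetical dispute over a resource between three agents.
dispute = [
    {"name": "alpha", "rank": 3},
    {"name": "beta", "rank": 7},
    {"name": "gamma", "rank": 1},
]
winner = resolve_by_dominance(dispute)
```

The resolution ignores who values the resource most, which is why a dominance ordering systematically leaves gains from trade on the table in a way that price-mediated coordination does not.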
My model probably mostly resembles this situation. Some ~singular AI will maintain a monopoly on violence. Maybe it will use all the resources in the solar system, leaving no space for anyone else. Alternatively (for instance, if alignment succeeds), it will leave one or more pools of resources that other agents can use. If the dominant AI fully protects these smaller agents from each other, then they'll satisfy their basic preferences and mostly withdraw into their own worlds, ending the hierarchy. If the dominant AI has some preference for whom to favor, or leaves open some options for aggression/exploitation which don't threaten the dominant AI itself, then someone is going to win that fight, making the hierarchy repeat fractally downward.
The main complication to this model is inertia: if human property rights are preserved well enough, then most resources would start out owned by humans, and it would take some time for the economy to equilibrate to the above.