If they’re being smashed in a literal sense, sure. I think the more likely way things would go is that hierarchies just cease to be a stable equilibrium arrangement. For instance, if the bulk of economic activity shifts (either quickly or slowly) to AIs and those AIs coordinate mostly non-hierarchically amongst themselves.
I would expect the AI society to need some sort of monopoly on violence to coordinate this, which is basically the same as a dominance hierarchy.
A monopoly on violence is not the only way to coordinate such things—even among humans, at small-to-medium scale we often rely on norms and reputation rather than an explicit enforcer with a monopoly on violence. The reason those mechanisms don’t scale well for humans seems to be (at least in part) that human cognition is tuned for Dunbar’s number.
And even if a monopoly on violence does turn out to be (part of) the convergent way to coordinate, that’s not at all synonymous with a dominance hierarchy. For instance, one could imagine the prototypical libertarian paradise in which a government with a monopoly on violence enforces property rights and contracts but otherwise leaves people to interact as they please. In that world, there’s one layer of dominance, but no further hierarchy beneath. That one layer is a useful foundational tool for coordination, but most of the day-to-day work of coordinating can then happen via other mechanisms (like e.g. markets).
(I suspect that a government which was just greedy for resources would converge to an arrangement roughly like that, with moderate taxes on property and/or contracts. The reason we don’t see that happen in our world is mostly, I claim, that the humans who run governments usually aren’t just greedy for resources, and instead have a strong craving for dominance as an approximately-terminal goal.)
Given a world of humans, I don’t think that libertarian society would be good enough at preventing competing powers from overthrowing it. Because there’d be an unexploitable-equilibrium condition where a government that isn’t focused on dominance is weaker than a government more focused on dominance, government would generally be held by those who have the strongest focus on dominance. Those who desire resources would be better off positioning themselves so that the dominant powers can become more dominant by giving them resources or putting them in charge of resources.
Given a world of AIs, I don’t think the dominant AI would need a market; it could just handle everything itself.
As I understand it, libertarian paradises are basically fantasies by people who don’t like the government, not realistically-achievable outcomes given the political realities.
This unexploitable-equilibrium argument only works insofar as governments less focused on dominance are, in fact, weaker militarily, which seems basically false in practice in the long run. For instance, autocratic regimes just can’t compete industrially with a market economy like most Western states today, and that industrial difference turns into a comprehensive military advantage with relatively moderate time and investment. And when countries switch to full autocracy, there’s sometimes a short-term military buildup, but they tend to end up far behind militarily a few years down the road, as I understand it.
Maybe one could say the essence of our difference is this:
You see the dominance ranking as defined by the backing-off tendency and assume it to be mainly an evolutionary psychological artifact.
Meanwhile, I see the backing-off tendency as the primary indicator of dominance, but take the core interesting aspect of dominance to be the tendency to leverage credible threats, which causes, but is not equivalent to, the psychological tendency to back off.
Under my model, dominance would then be able to cause bargaining power (e.g. robbing someone by threatening to shoot them), but one could also use bargaining power to purchase dominance (e.g. spending money to purchase a gun).
This leaves dominance and bargaining power independent: on the one hand you have the weak-strong axis, where both increase together; on the other hand you have the merchant-king axis, where they directly trade off.
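To make that two-axis picture concrete, here is a minimal toy sketch in Python. Everything in it (the Agent class, the capability and arms_share parameters, the numbers) is hypothetical, invented purely for illustration: capability stands in for the weak-strong axis, and the split of one’s budget between arms and wealth stands in for the merchant-king axis.

```python
# Toy model of the claimed two-axis structure (hypothetical, illustrative only).
from dataclasses import dataclass

@dataclass
class Agent:
    capability: float  # weak-strong axis: raw strength/resources
    arms_share: float  # merchant-king axis: fraction of budget spent on arms (0..1)

    @property
    def dominance(self) -> float:
        # Credible-threat capacity: grows with capability and with arms spending.
        return self.capability * self.arms_share

    @property
    def bargaining_power(self) -> float:
        # Tradeable wealth: grows with capability, shrinks as budget goes to arms.
        return self.capability * (1.0 - self.arms_share)

king = Agent(capability=10.0, arms_share=0.9)      # high dominance, low wealth
merchant = Agent(capability=10.0, arms_share=0.1)  # low dominance, high wealth
peasant = Agent(capability=1.0, arms_share=0.5)    # weak on both axes

for name, a in [("king", king), ("merchant", merchant), ("peasant", peasant)]:
    print(f"{name}: dominance={a.dominance:.1f}, bargaining={a.bargaining_power:.1f}")
```

Running it, the king and merchant have equal capability but opposite dominance/bargaining profiles, while the peasant is weak on both. In a population varying mostly in capability the two quantities correlate; at fixed capability they trade off, which is the sense in which they come out "independent."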
I guess to expand: US military doctrine since World War II has been that there’s a need to maintain dominance over countries that pursue military strength to the disadvantage of their citizens. Hence while your statement is somewhat true, it’s directly and intentionally the result of a dominance hierarchy maintained by the US.
Western states today use state violence to enforce high taxes and extensive government regulation. In my view they’re probably more dominance-oriented than states which just leave rural farmers alone. At least some of this is part of a Keynesian policy to boost economic output, and economic output is closely related to military formidability (due to the ability to afford raw resources and advanced technology for the military).
Hm, I guess you would see this as more closely related to bargaining power than to dominance, because in your model dominance is a human-psychology-thing and bargaining power isn’t restricted to voluntary transactions?
I am going to guess that the diff between you and John’s models here is that John thinks LDT/FDT solves this, and you don’t.
Good guess, but that’s not cruxy for me. Yes, LDT/FDT-style things are one possibility. But even if those fail, I still expect non-hierarchical coordination mechanisms among highly capable agents.
Gesturing more at where the intuition comes from: compare hierarchical management to markets, as a control mechanism. Markets require clean factorization: a production problem needs to be factored into production of standardized, verifiable intermediate goods in order for markets to handle the production pipeline well. If that can be done, then markets scale very well: they pass exactly the information and incentives people need (in the form of prices). Hierarchies, in contrast, scale very poorly. They provide basically zero built-in mechanisms for passing the right information between agents, or for providing precise incentives to each agent. They’re the sort of thing which can work okay at small scale, where the person at the top can track everything going on everywhere, but they quickly become extremely bottlenecked on the top person as you scale up. And you can see this pretty clearly at real-world companies: past a very small size, companies are usually extremely bottlenecked on the attention of top executives, because lower-level people lack the incentives/information to coordinate on their own across different parts of the company.
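As a toy back-of-envelope model of that scaling claim (the functions and numbers below are hypothetical, just to make the asymptotics visible): if every cross-team issue must route through the top of a hierarchy, the top node’s attention load grows with headcount, whereas a market participant only needs to watch the prices of the goods it actually trades.

```python
# Toy back-of-envelope: attention load at the top of a pure hierarchy,
# vs. per-agent load in a pure market. Purely illustrative; real
# organizations mix both mechanisms.

def hierarchy_top_load(n_agents: int, cross_team_issues_per_agent: int = 5) -> int:
    # Every cross-agent issue escalates to the top for resolution,
    # so the top's load grows linearly with headcount.
    return n_agents * cross_team_issues_per_agent

def market_per_agent_load(n_goods_watched: int = 5) -> int:
    # Each participant only tracks prices for the goods it trades,
    # independent of how many other participants exist.
    return n_goods_watched

for n in [10, 100, 10_000]:
    print(f"{n} agents: top of hierarchy handles {hierarchy_top_load(n)} issues; "
          f"each market participant watches {market_per_agent_load()} prices")
```

The point is just the asymptotics: the hierarchy’s bottleneck grows with organization size, while the market’s per-agent load stays roughly constant, provided the production problem factors into standardized, verifiable goods with prices.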
(Now, you might think that an AI in charge of e.g. a company could make the big hierarchy work efficiently just by being capable enough to track everything itself. But at that point, I wouldn’t expect to see a hierarchy at all; the AI can just do everything itself and not have multiple agents in the first place. Unlike humans, AIs will not be limited by their number of hands. If there is to be some arrangement involving multiple agents coordinating in the first place, then it shouldn’t be possible for one mind to just do everything itself.)
On the other hand, while dominance relations scale very poorly as a coordination mechanism, they are algorithmically relatively simple. Thus my claim from the post that dominance seems like a hack for low-capability agents, and that higher-capability agents will mostly rely on some other coordination mechanism.
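As a minimal sketch of that "algorithmically simple" claim (hypothetical code, not from the post): dominance-based conflict resolution reduces to a single rank comparison, with no prices, no contracts, and no model of the other agent beyond its rank.

```python
# Minimal sketch: dominance as a one-comparison conflict-resolution rule.
# No information about preferences or production is exchanged, which is
# exactly why it is cheap, and why it scales so poorly as a coordinator.

def resolve_conflict(rank_a: int, rank_b: int, resource: str) -> str:
    # Lower-ranked agent backs off; higher rank takes the contested resource.
    return f"A takes {resource}" if rank_a > rank_b else f"B takes {resource}"

print(resolve_conflict(rank_a=3, rank_b=1, resource="the waterhole"))
```

That cheapness is the appeal for low-capability agents; the near-total lack of information transfer is the cost.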
My model probably mostly resembles this situation. Some ~singular AI will maintain a monopoly on violence. Maybe it will use all the resources in the solar system, leaving no space for anyone else. Alternatively (for instance if alignment succeeds), it will leave one or more sources of resources that other agents can use. If the dominant AI fully protects these smaller agents from each other, then they’ll handle their basic preferences and mostly withdraw into their own world, ending the hierarchy. If the dominant AI has some preference for who to favor, or leaves some options for aggression/exploitation which don’t threaten the dominant AI, then someone is going to win this fight, making the hierarchy repeat fractally down.
The main complication to this model is inertia: if human property rights are preserved well enough, then most resources would start out owned by humans, and it would take some time for the economy to equilibrate to the above.
Maybe, but I’m not sure it’s even necessary to invoke LDT/FDT/UDT; one could instead argue that coordinating even through purely causal methods is so cheap for AIs that coordination, and as a side effect interfaces, become much less of a bottleneck than they are today.
In essence, I think the diff between John’s models and tailcalled’s models is plausibly in how easy coordination in a general sense can ever be for AIs, and whether AIs can coordinate much better than humans do today: John thinks coordination is a taut constraint for humans but not for AIs, while tailcalled thinks coordination is hard for both AIs and humans due to fundamental limits.
LDT/FDT is a central example of rationalist-Gnostic heresy.