If coordination ability increases incrementally over time, then we should see a gradual increase in the concentration of AI agency over time, rather than the sudden emergence of a single unified agent. To the extent this concentration happens incrementally, it will be predictable, the potential harms will be noticeable before getting too extreme, and we can take measures to pull back if we realize that the costs of continually increasing coordination abilities are too high. In my opinion, this makes the challenge here dramatically easier.
(I’ll add that paragraph to the outline, so that other people can understand what I’m saying)
I’ll also quote from a comment I wrote yesterday, which adds more context to this argument:
“Ability to coordinate” is continuous, and will likely increase incrementally over time
Different AIs will likely have different abilities to coordinate with each other
Some AIs will eventually be much better at coordinating with each other than humans are at coordinating with each other
However, I don’t think this happens automatically as a result of AIs getting more intelligent than humans
The moment at which we hand over control of the world to AIs will likely occur when the ability of AIs to coordinate is only modestly above human level (and very far below perfect).
As a result, humans don’t need to solve the problem of “What if a set of AIs forms a unified coalition because they can flawlessly coordinate?” since that problem won’t arise while humans are still in charge.
Systems of law, peaceable compromise, and trade emerge relatively robustly whenever there are agents of varying levels of power, with separate values, who need mechanisms to facilitate the satisfaction of those separate values
One reason for this is that working within a system of law is routinely more efficient than going to war with other agents, even if you are very powerful
The existence of a subset of agents that can coordinate better amongst themselves than they can with other agents doesn’t necessarily undermine the legal system in a major way, at least in the sense of causing the system to fall apart in a coup or revolution.