It now seems clear that AIs will also descend more directly from a common ancestor than you might have naively expected in the CAIS model, since almost every AI will be a modified version of one of only a few base foundation models. That has important safety implications, since problems in the base model might carry over to the downstream models, which will be spread throughout the economy. That said, the fact that foundation model development will be highly centralized, and thus controllable, is perhaps a safety bonus that loosely cancels out this consideration.
The first point here (that problems in a widely-used base model will propagate widely) concerns me as well. From distributed systems we know that:
1. Individual components will fail.
2. To withstand failures of components, use redundancy and reduce the correlation of failures.
By point 1, we should expect alignment failures. (It’s not so different from bugs and design flaws in software systems, which are inevitable.) By point 2, we can withstand them using redundancy, but only if the failures are sufficiently uncorrelated. Unfortunately, the tendency towards monopolies in base models is increasing the correlation of failures.
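A toy calculation makes the correlation point concrete. The numbers below (a 1% per-component failure rate, triple redundancy) are assumptions invented for illustration, not estimates of any real system:

```python
# Illustrative only: the failure probability and degree of redundancy are
# assumptions made up for this example, not measurements of any real system.
p_fail = 0.01   # assumed chance that one component fails on a given task
n = 3           # triple redundancy: the system fails only if all copies fail

# Fully independent copies: all n must fail at the same time.
p_independent = p_fail ** n          # 0.01 ** 3 = 1e-06

# Fully correlated copies (e.g. fine-tunes of one flawed base model):
# a single shared flaw takes out every copy together.
p_correlated = p_fail                # still 1e-02

print(f"independent copies fail together: {p_independent:.0e}")
print(f"correlated copies fail together:  {p_correlated:.0e}")
```

Redundancy buys roughly four orders of magnitude here, but only in the independent case; with a shared base model the extra copies buy nothing.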
As a concrete example, consider AI controlling a military. (As AI improves, there are increasingly strong incentives to do so.) If such a system had a bug that caused it to attempt a military coup, then, if successful, it would have seized control of the government from humans. We know from history that successful military coups have happened many times, so this does not require any special properties of AI.
Such a scenario could be prevented by populating the military with multiple AI systems with decorrelated failures. But to do that, we’d need such systems to actually be available.
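One way to use several such systems is classic N-version redundancy: route each high-stakes decision to independently developed models and act only when a quorum agrees. The sketch below is hypothetical; the model stand-ins and their interface are invented for illustration:

```python
from collections import Counter

def quorum_decision(request, models, quorum=2):
    """Collect a vote from several independently developed models and act
    only if at least `quorum` of them agree; otherwise escalate to a human."""
    votes = [model(request) for model in models]
    decision, count = Counter(votes).most_common(1)[0]
    return decision if count >= quorum else "escalate_to_human"

# Toy stand-ins for independently developed models (hypothetical).
model_a = lambda req: "deny"
model_b = lambda req: "deny"
model_c = lambda req: "approve"

print(quorum_decision("launch_strike", [model_a, model_b, model_c]))  # "deny"
```

The scheme only works if the models fail independently; if all three are fine-tunes of the same base model, one inherited flaw can produce a unanimous but wrong vote.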
It seems to me the main problem is the natural tendency toward monopoly in technology. The preferable alternative is robust competition among several proprietary and open-source options, which might need government support. (Unfortunately, it seems that many safety-concerned people believe that competition and open source are bad, which I view as misguided for the above reasons.)
There are other forces that push strongly toward monopoly. (Note that even in open source, "Linux" is closer to a monoculture than not, due to shared critical components such as the kernel and drivers.)
High cost to train any large AI system.
High cost to validate it on real-world tasks.
At any given time, especially on real-world tasks, some systems will be measurably better than others.
Tool chain. The ecosystem around models is at least as monopolistic as the models themselves, probably much more. You need immensely complicated cloud-based stacks, realtime components so models can be used for robotics, simulators, a pathway to legal approval, and common infrastructure of all sorts. All of this is enormously difficult and expensive to build, so it is a natural monopoly unless the dominant player imposes unreasonable rules or supplies poor-quality software. (See Apple and Microsoft.)
Pricing. Monopolies can charge a price low enough that no competitor can break even.