Isn’t the “foom scenario” referring to an individual AI that quickly gains ASI status by self-improving?
The equivalent of the “foom scenario” for CAIS would be rapidly improving basic AI capabilities driven by automated AI R&D services, such that the aggregate “soup of services” can quickly take on increasingly complex tasks with ever-improving performance. Viewed as an aggregate, the “soup” does look like a thing that is quickly becoming superintelligent by self-improving.
The main difference from the classical AI foom scenario is that the thing that’s improving cannot easily be modeled as pursuing a single goal. There are also more safety affordances: humans can remain in the loop for services with large real-world consequences, interactions between services can be monitored to catch unexpected behavior, etc.