I’m finding it hard to see how we could get (1) without some discontinuity?
When I think about why (1) would be true, the argument that comes to mind is that single AI systems will be extremely expensive to deploy, which means that only a few very rich entities could own them. However, this would contradict the general trend of ML being hard to train and easy to deploy. Unlike, say, nukes, once you've trained your AI you can create a lot of copies and distribute them widely.
Re whether ML is easy to deploy: most compute these days goes into deployment rather than training. And there are a lot of other deployment challenges that you don't face during training, where you train a single model under lab conditions.
I agree with this, but when I said deployment I meant deployment of a single system, not several.
Fair, though I'd probably count "making lots of copies of a trained system" as deploying a single system here.
I'm confused about why (1) and (2) are separate scenarios, then. Perhaps because in (2) there are many different types of AI?
Yes. To the extent that the system in question is an agent, I’d roughly think of many copies of it as a single distributed agent.
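A minimal back-of-the-envelope sketch of the compute claim above, i.e. that cumulative deployment (inference) compute comes to dominate the one-time training cost. Every number here is an illustrative assumption rather than a figure from the discussion:

```python
# When does cumulative inference (deployment) compute overtake the
# one-time training compute? All constants are illustrative assumptions.

TRAIN_FLOPS = 3e24             # assumed one-time training cost, in FLOPs
PARAMS = 7e10                  # assumed model size: 70B parameters
FLOPS_PER_TOKEN = 2 * PARAMS   # rough forward-pass cost per generated token
TOKENS_PER_QUERY = 1_000       # assumed average response length
QUERIES_PER_DAY = 50_000_000   # assumed total load across all deployed copies

daily_inference_flops = FLOPS_PER_TOKEN * TOKENS_PER_QUERY * QUERIES_PER_DAY
breakeven_days = TRAIN_FLOPS / daily_inference_flops

print(f"daily inference compute: {daily_inference_flops:.1e} FLOPs")
print(f"inference overtakes training after ~{breakeven_days:.0f} days")
```

Under these assumptions, inference overtakes training after roughly 430 days, and the crossover scales inversely with the assumed query volume; the point is only that training is a fixed cost while deployment compute keeps growing as copies are distributed.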