If this saturates, it would probably saturate very far above human level...
Foom is a much stronger claim than this. It’s saying that there will be an incredibly fast, localized intelligence explosion involving literally one single AI system improving itself. Your scenario of an “ecosystem” of independent AI researchers working together sounds more like the “slow” takeoff of Christiano or Hanson than EY-style fast takeoff.
That depends on the dynamics, not on whether it is localized or distributed. E.g. if it includes a takeover of a large part of the Internet, it will end up very distributed, so presumably a successful foom will get more distributed as it unfolds… But initially a company will have it on its own local cluster, so it might be fairly localized for a while, depending on how they structure it...
(Monolithic abstractions, like a “singleton”, are very questionable. Even a single human is fruitfully decomposed into a “society of mind”, following Minsky. It might look “monolithic” or like a “singleton” from the outside, but it will have all kinds of non-trivial internal dynamics, internal discourse, internal disagreements, and so on; this rich internal structure might be somewhat observable from the outside, or might be hidden.)
The real uncertainty is time: how fast does an “intelligence explosion” have to be before people are ready to call it “foom” rather than a slow takeoff? https://www.lesswrong.com/tag/ai-takeoff puts the boundary between months and years:
A soft takeoff refers to an AGI that would self-improve over a period of years or decades.
A hard takeoff (or an AI going “FOOM”) refers to AGI expansion in a matter of minutes, days, or months.
I certainly don’t think the scheme I described would work in minutes; I am less certain about days, and I am mostly thinking in terms of weeks (months do feel a bit too long to me, although who knows).