Actually, this isn’t how people (in the AI safety community) generally use the term slow takeoff.
Quoting from the blog post by Paul:
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).
[...]
(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
Slow takeoff still can involve a singularity (aka an intelligence explosion).
The terms “fast/slow takeoff” are somewhat bad because they are often used to discuss two different questions:
How long does it take to go from the point where AI is seriously useful/important (e.g. results in 5% additional GDP growth per year in the US) to AIs which are much smarter than humans? (This is what people would normally think of as fast vs. slow.)
Is takeoff discontinuous vs continuous?
And this explainer introduces a third idea:
Is takeoff exponential or does it have a singularity (hyperbolic growth)?
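To make that third distinction concrete, here is a minimal sketch (my own illustration, not from Paul’s post or the explainer; the constants k and x0 are arbitrary) contrasting exponential growth, which stays finite at every finite time, with hyperbolic growth, which reaches a singularity in finite time:

```python
import math

# Toy comparison of the two growth regimes (illustrative constants only,
# not empirical estimates of anything).
#
# Exponential growth: dx/dt = k*x     ->  x(t) = x0 * exp(k*t)
#                     (finite at every finite time)
# Hyperbolic growth:  dx/dt = k*x**2  ->  x(t) = x0 / (1 - k*x0*t)
#                     (diverges at the finite time t* = 1 / (k*x0))

k, x0 = 0.05, 1.0

def exponential(t):
    return x0 * math.exp(k * t)

def hyperbolic(t):
    # Closed-form solution of dx/dt = k*x**2; only valid for t < 1/(k*x0).
    return x0 / (1.0 - k * x0 * t)

t_star = 1.0 / (k * x0)  # = 20.0 with these constants
for t in (0.0, 10.0, 19.0, 19.9):
    print(f"t = {t:5.1f}   exponential = {exponential(t):8.2f}   hyperbolic = {hyperbolic(t):10.2f}")
print(f"Hyperbolic growth blows up as t -> {t_star}")
```

The intuition: under exponential growth the doubling time stays constant, while under hyperbolic growth each doubling arrives faster than the last, which is what produces a finite-time singularity rather than merely rapid growth.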