Distinguishing definitions of takeoff
I find discussions about AI takeoff to be very confusing. Often, people will argue for “slow takeoff” or “fast takeoff” and then, when I ask them to operationalize what those terms mean, they end up saying something quite different from what I thought those terms meant.
To help alleviate this problem, I aim to compile the definitions of AI takeoff that I’m currently aware of, with an emphasis on definitions that have clear specifications. I will continue updating the post as long as I think it serves as a useful reference for others.
In this post, an AI takeoff can be roughly construed as “the dynamics of the world associated with the development of powerful artificial intelligence.” These definitions characterize different ways that the world can evolve as transformative AI is developed.
Foom/Hard takeoff
The traditional hard takeoff position, or “Foom” position (these appear to be equivalent terms), was characterized in this post from Eliezer Yudkowsky. It contrasts with Hanson’s takeoff scenario by emphasizing local dynamics: rather than a population of artificial intelligences coming into existence, a single intelligence would quickly reach a level of competence that outstrips the world’s ability to control it. The proposed mechanism driving such a dynamic is recursive self-improvement, though Yudkowsky later suggested that this wasn’t necessary.
Yudkowsky defended the ability of recursive self-improvement to induce a hard takeoff in Intelligence Explosion Microeconomics, and argued against Robin Hanson in the AI Foom debates. Watch this video to see the live debate.
Given the word “hard” in this notion of takeoff, a “soft” takeoff could simply be defined as the negation of a hard takeoff.
Hansonian “slow” takeoff
Robin Hanson objected to hard takeoff by predicting that growth in AI capabilities will not be extremely uneven between projects. In other words, there is unlikely to be one AI project, or even a small set of AI projects, that produces a system that outstrips the abilities of the rest of the world. While he rejects Yudkowsky’s argument, it is inaccurate to say that Robin Hanson expected growth in AI capabilities to be slow.
In Economic Growth Given Machine Intelligence, Hanson argues that AI-induced growth could cause GDP to double on the timescale of months. Very high economic growth would mark a radical transition to a faster mode of technological progress and capabilities, something that Hanson argues is entirely precedented in human history.
The technology that Hanson envisions inducing fast economic growth is whole brain emulation, which he wrote a book about. In general, Hanson rejects the framework in which AGI is seen as an invention that occurs at a particular moment in time: instead, AI should be viewed as an input to the economy (like electricity, though the considerations may be different).
Bostromian takeoffs
Nick Bostrom appeared to throw away much of the terminology in the AI Foom debate in order to invent his own. In Superintelligence he provides a characterization of three types of AI capability growth modes, defined by the clock-time (real physical time) from when a system is roughly human-level to when it is strongly superintelligent, defined as “a level of intelligence vastly greater than contemporary humanity’s combined intellectual wherewithal.”
Some have objected to Bostrom’s use of clock-time to define takeoff, instead arguing that work required to align systems is a better metric (though harder to measure).
Slow
A slow takeoff is one that occurs over the timescale of decades or centuries. Bostrom predicted that this timescale would allow institutions, such as governments, to react to new AI developments. It would also allow incrementally more powerful technologies to be tested without the existential risks associated with testing.
Fast
A fast takeoff is one that occurs over the timescale of minutes, hours, or days. Given such short time to react, Bostrom believes that local dynamics of the takeoff become relevant, as was the case in Yudkowsky’s foom scenario.
Moderate
A moderate takeoff is situated between slow and fast, and occurs on the timescale of months or years.
Continuous takeoff
Continuous takeoff was defined, and partially defended, in my post. Its meaning primarily derives from Katja Grace’s post on discontinuous progress around the development of AGI. In that post, Grace characterizes discontinuities:
We say a technological discontinuity has occurred when a particular technological advance pushes some progress metric substantially above what would be expected based on extrapolating past progress. We measure the size of a discontinuity in terms of how many years of past progress would have been needed to produce the same improvement. We use judgment to decide how to extrapolate past progress.
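Grace’s metric can be sketched numerically. The toy calculation below (all numbers and names are my own, purely illustrative) extrapolates a linear trend from past progress and expresses a jump as years of past progress:

```python
# A minimal sketch of Grace's discontinuity metric, with made-up numbers.
# We extrapolate past progress linearly and express a jump in terms of
# "years of past progress" -- all data here is illustrative, not real.

def discontinuity_years(history, new_value):
    """history: list of (year, metric) points; new_value: metric after the advance."""
    (y0, m0), (y1, m1) = history[0], history[-1]
    annual_rate = (m1 - m0) / (y1 - y0)  # past progress per year (crude linear fit)
    expected_next = m1 + annual_rate     # value extrapolated one year ahead
    jump = new_value - expected_next     # progress above the extrapolation
    return jump / annual_rate            # convert to years of past progress

# e.g. a metric that grew 10 units/year for a decade, then jumped to 170
# when 110 was expected: a discontinuity worth 6 years of past progress.
past = [(2010, 0), (2020, 100)]
print(discontinuity_years(past, 170))  # prints 6.0
```

In practice Grace notes that judgment is needed to decide how to extrapolate; a linear fit is only the simplest choice.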
In my post, I extrapolate this concept and invert it, using terminology that I saw Rohin use in this Alignment Newsletter edition, and define continuous takeoff as
A scenario where the development of competent, powerful AI follows a trajectory that is roughly in line with what we would have expected by extrapolating from past progress.
Gradual/incremental takeoff?
Some people objected to my use of the word continuous, as they found that the words gradual or incremental are more descriptive and mathematically accurate. After all, a function can be continuous without being gradual: a logistic curve with a sufficiently steep transition is continuous everywhere, yet undergoes nearly all of its change almost instantaneously.
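As an illustration, a steep logistic is continuous everywhere yet changes almost all at once. A minimal sketch (the steepness value is arbitrary, chosen only to make the point):

```python
import math

# Illustrative only: a steep logistic is continuous everywhere, yet nearly
# all of its change happens inside a tiny interval -- continuous, not gradual.
def f(x, k=1000.0):
    return 1.0 / (1.0 + math.exp(-k * x))

print(f(-0.01))  # ~0.000045 -- essentially zero just left of the transition
print(f(0.01))   # ~0.999955 -- essentially one just right of it
```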
Additionally, if you agree with Hanson’s thesis that history can be seen as a series of economic growth modes, each faster than the last, then continuous takeoff as plainly defined is in trouble. That’s because technological progress from 1800–1900 was much faster than technological progress from 1700–1800. Therefore, “extrapolating from past progress” would provide an incorrect estimate of progress if one did not foresee the industrial revolution. In general, extrapolating from past progress is hard because it depends on the reference class you are using to forecast.
Paul slow takeoff
Paul Christiano argues that we should characterize takeoff in terms of economic growth rates (similar to Hanson) but uses a definition that emphasizes how quickly the economy transitions into a period of higher growth. He defines slow takeoff as
There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)
and defines fast takeoff as the negation of the above statement. Note that this definition leaves a third possibility: you could believe that world output will never double during a 1 year interval, a position I would refer to as “no takeoff,” which I explain next.
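Paul’s criterion is concrete enough to check against a hypothetical world-output series. The sketch below is my own simplification with made-up data (it compares the years in which doubling intervals complete, rather than Paul’s exact interval ordering), classifying a yearly series as slow takeoff, fast takeoff, or no takeoff:

```python
# Simplified check of Paul's criterion on a yearly world-output series:
# slow takeoff iff some complete 4-year doubling finishes no later than the
# first 1-year doubling; "no takeoff" if output never doubles in 1 year.

def first_doubling_end(output, span):
    """Index of the first year at which a `span`-year doubling completes, or None."""
    for t in range(span, len(output)):
        if output[t] >= 2 * output[t - span]:
            return t
    return None

def classify(output):
    four = first_doubling_end(output, 4)
    one = first_doubling_end(output, 1)
    if one is None:
        return "no takeoff"
    return "slow takeoff" if four is not None and four <= one else "fast takeoff"

# Made-up series: 20% yearly growth (doubles in ~4 years), then an abrupt
# doubling year at the end -- the 4-year doubling completes first.
gdp = [100 * 1.2 ** t for t in range(8)] + [100 * 1.2 ** 8 * 2]
print(classify(gdp))  # prints "slow takeoff"
```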
Paul’s outline of slow takeoff shares some of its meaning with continuous takeoff, because under a slow transition to a higher growth mode, change won’t be sudden.
No takeoff
“No takeoff” is essentially my term for the belief that world economic growth rates won’t accelerate to a very high level (perhaps >30% real GDP growth rate in one year) following the development of AI. William MacAskill is a notable skeptic of AI takeoff. I have created this Metaculus question to operationalize the thesis.
The Effective Altruism Foundation wrote this post suggesting that peak economic growth rates may lie in the past. If we use the outside view, this position may be reasonable. Economic growth rates have slowed down since the 1960s despite the rise of personal computers and the internet: technologies that, ahead of time, we might naively have predicted would be transformative.
This position should not be confused with the idea that humanity will never develop superintelligent computers, though that scenario is compatible with no takeoff.
Drexler’s takeoff
Eric Drexler argues in Comprehensive AI Services (CAIS) that future AI will be modular: it is unlikely that a single system able to perform a diverse set of tasks all at once will arrive before individual systems that can each perform those tasks more competently. This idea shares groundwork with Hanson’s objection to a local takeoff. The reverse of this scenario is what Hanson calls “lumpy AI,” where single agentic systems outcompete a set of services.
Drexler uses the CAIS model to argue against a binary characterization of self-improvement. Just as technology already feeds into its own development, so that the world can already be seen as “recursively self-improving,” future AI research could feed into itself as recursive technological improvement, without any necessary focus on single systems improving themselves.
In other words, rather than viewing AIs as either self improving or not, self improvement can be seen as a continuum from “the entire world works to improve a system” on one end, and “a single local system improves only itself, with outside forces providing minimal benefit to growth in capabilities” on the other.
Baumann’s soft takeoff
In this post, Tobias Baumann argues that we should operationalize soft takeoff in terms of how quickly the fraction of global economic activity attributable to autonomous AI systems will rise. “Time” here is not necessarily clock-time, as it was in Bostrom’s definitions. Time can also refer to economic time, a measure of time that adjusts for the rate of economic growth, and political time, a measure that adjusts for the rate of social change.
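To make “economic time” concrete, here is one possible operationalization. This is my own construction for illustration, not Baumann’s formula: count elapsed time in doublings of world output, so that periods of fast growth register as more elapsed “time” than equally long periods of slow growth.

```python
import math

# One illustrative way to operationalize "economic time" (my construction,
# not Baumann's): measure elapsed time in doublings of world output.
def economic_time(output):
    """Cumulative doublings of output since the start of the series."""
    return [math.log2(y / output[0]) for y in output]

# Two decades at 3.5% growth pass less economic time than one decade at 20%:
slow = [1.035 ** t for t in range(21)]
fast = [1.20 ** t for t in range(11)]
print(economic_time(slow)[-1])  # ~1.0 doubling over 20 years
print(economic_time(fast)[-1])  # ~2.6 doublings over 10 years
```

Under a measure like this, a takeoff that looks fast in clock-time can still be “slow” in economic time if the surrounding economy is accelerating with it.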
He explains that this operationalization avoids the pitfalls of definitions that rely on moments in time where AI reaches thresholds such as “human-level” or “superintelligent.” He argues that AI is likely to surpass human abilities in some domains and not in others, rather than surpass us in all ways all at once.
Robin Hanson appears to agree with a similar measure for AI progress.
Less common definitions
Event Horizon/Epistemic Horizon
In 2007, Yudkowsky outlined the three schools of singularity, which was perhaps the state of the art for takeoff discussions at the time. In it he included his own scenario (Foom), the Event Horizon, and Accelerating Change.
The Event Horizon hypothesis could be seen as an extrapolation of Vernor Vinge’s definition of the technological singularity. It is defined as a point in time after which current models of future progress break down, which is essentially the opposite definition of continuous takeoff.
An epistemic horizon would be relevant for decision making because it would imply that AI progress could come suddenly, without warning. If this were true, then our safety guarantees assumed under a continuous takeoff scenario would fail. Furthermore, even if we could predict rapid change ahead of time, due to social pressures, people might fail to act until it’s too late, a position argued for in There’s No Fire Alarm for Artificial General Intelligence.
(Note, I see a lot of people interpreting the Fire Alarm essay as merely arguing that we can’t predict rapid progress before it’s too late. The essay itself dispels this interpretation, “When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.”)
Accelerating change
Continuing the discussion from the three schools of singularity, this version of AI takeoff is most closely associated with Ray Kurzweil. Accelerating change is characterized by AI capability trajectories following smooth exponential curves. It shares with continuous takeoff the predictability of AI developments, but is narrower and makes much more specific predictions.
Individual vs. collective takeoff
Kaj Sotala has used the words “individual takeoff” vs. “collective takeoff” which I think are roughly synonymous with the local vs. global distinction provided by the Foom debate. Other words that often come up are “distributed” and “diffuse”, “unipolar” vs “multipolar”, and “decisive strategic advantage.”
Goertzel’s semihard takeoff
I can’t say much about this one except that it’s in-between soft and hard takeoff.
Further reading
A Contra Foom Reading List and Reflections on Intelligence from Magnus Vinding
Self-improving AI: an Analysis, from John Storrs Hall
How sure are we about this AI stuff?, from Ben Garfinkel
Can We Avoid a Hard Takeoff from Vernor Vinge
trying to put this in my own words to remember it
so different axes for take-off dynamics include:
- time span: physical time, economic time, political time, AI time (development speed of front runners over others)
- shape of the take-off curve, ex.: exponential, S-curves, linear, etc.
- monopolistic effect: do front runners become less likely to be outcompeted as they grow? how many large players will there be? also: how will this change? ex.: it could be that AI doesn’t have strong monopolistic effect until you reach a certain level
- measurement to quantify:
  - AI progress: GDP, decisive strategic advantage
  - AI progress speed: time until AI having more power than the rest of humanity / time until solving the control problem
- related: impact on forecasting capabilities
This is a fantastic set of definitions, and it is definitely useful. That said, I want to add something to what you said near the end. I think the penultimate point needs further elaboration. I’ve spoken about “multi-agent Goodhart” in other contexts, and discussed why I think it’s a fundamentally hard problem, but I don’t think I’ve really clarified how I think this relates to alignment and takeoff. I’ll try to do that below.
Essentially, I think that the question of multipolarity versus individual or collective takeoff is critical, as (to me) it is the most worrying scenario for alignment.
Individual or collective vs. Multipolar takeoff
Individual takeoff implies that a coherent, agentic system is being improved or accelerating; such a takeoff could be defined economically, with a single company or system accounting for a majority of humanity’s economic output, or it could be a foom or similar scenario. Collective takeoff would imply that a set of agentic systems are accelerating in ways that are (in the short term) non-competitive. If humanity as a whole benefits widely from greatly increased economic growth, at some point even doubling output in a year, yet there is no single dominant system, that would be a collective takeoff.
Multipolar takeoff, however, is a scenario where systems are actively competing in some domain. It seems plausible that competition of this sort would provide incentives for rapid improvement that could affect even non-agentic systems like Drexler’s CAIS. Alternatively, or additionally, improvement could be enabled by feedback from competition with peer or near-peer systems. (This seems to be the way humans developed intelligence, and so it seems a priori worrying.) In either case, this type of takeoff could involve zero- or negative-sum interaction between systems.

If a single winner emerged quickly enough to prevent destructive competition, it would be the “evolutionary” winner, with goals aligned with success in that competition. For that reason, it seems implausible to me that it would be aligned with humanity’s interests as a whole. If no winner emerged, it seems that convergent instrumental goals combined with rapidly increasing capabilities would lead at best to a Hansonian Em-scenario, where systems respect property and other rights, but all available resources are directed towards competition, and systems expand to take over resources until the marginal cost of expansion equals the marginal benefit. It seems implausible that competition reaching this point in a takeoff scenario would leave significant resources for the remainder of humanity, likely at least wasting our cosmic endowment. If the competition turned negative-sum, there could be even faster races to the bottom, leading to worse consequences.
So do I. So thanks a lot for this summary!
This might be interesting to compare against how models of the stock market have changed over time. (Its particular relationship with statistics may be illuminating.)
I thought a bit about this, but haven’t figured it out: how can this be measured? if AI is commoditized, AI companies won’t make a profit from it. AI researchers might make more money, but likely not more than however much it would cost to train more AI researchers (or something like that). maybe we can see which industries have their price reduced because of AI, and count this as a lower bound for the consumer surplus created by AI. what else?