I think a problem with all the proposed terms is that they are all binaries, and one bit of information is far too little to characterize takeoff:
One person’s “slow” is >10 years, another’s is >6 months.
The beginning and end points are super unclear; some people might want to put the end point near the limits of intelligence, some might want to put the beginning point at >2x AI R&D speed, some at >10x, etc.
In general, a good description of takeoff should characterize capabilities at each point on the curve.
So I don’t really think that any of the binaries are all that useful for thinking or communicating about takeoff. I don’t have a great ontology for thinking about takeoff myself to suggest instead, but in communication I generally just try to define a start point and an end point and then say quantitatively how long the time between them might be. One of the central intervals I really care about is the time between wakeup and takeover-capable AIs.
wakeup = “the first period in time when AIs are sufficiently capable that senior government people wake up to incoming AGI and ASI”
takeover-capable AIs = “the first time there is a set of AI systems that are coordinating together and could take over the world if they wanted to”
The reason to think about this period is that (kind of by construction) it’s the time when unprecedented government actions that matter could happen. So when planning for that sort of thing, the length of this period really matters.
Of course, the start and end times I think about are both fairly vague. They also aren’t purely a function of AI capabilities; they depend on things like “who is in government” and “how capable our institutions are at fighting a rogue AGI”. Also, many people believe that we will never get takeover-capable AIs, even at superintelligence.
I support replacing binary terms with quantitative terms.
I think in most cases it might make sense to give the unit you expect to measure it in. “Days-long takeoff”. “Months-long takeoff”. “Years-long takeoff”. “Decades-long takeoff”.
Minutes-long takeoff...
[By comparison, I forget the reference, but there is a paper estimating how quickly a computer virus could destroy most of the Internet. About 15 minutes, if I recall correctly.]
(This bit isn’t serious) “I mean, a days-long takeoff leaves you with loads of time for the hypersonic missiles to destroy all of Meta’s datacenters.”
Serious answer that is agnostic as to how you are responding:
only if you know the takeoff is happening
Fwiw, I feel fine treating both slow/fast and smooth/sharp as continua. Takeoffs and timelines can be slower or faster and compared on that axis.
I agree that if you are just treating those as booleans you’re going to get confused, but the words seem about as scalar a shorthand as one could hope for without switching entirely to more explicit quantification.