What exactly do you mean by “we are now in a fast takeoff”? (I wouldn’t say we’re in a fast takeoff until AI systems are substantially accelerating improvement in AI systems, which isn’t how I’d characterize the current situation.)
I might be abusing the phrase? We are in a “we should probably have short timelines; we can see the writing on the wall of how these systems might be constructible” situation, but not in a literal “self-improvement” situation.
Is that called something different?
There have, over the decades, been plans for “self-improving artificial general intelligence” where the AGI’s cleverness is aimed directly at improving the AGI’s cleverness, and the thought is that maybe this will amplify, like neutron cascades in fission, or like an epidemic where sick people cause more sick people in a positive feedback loop.
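To make the feedback-loop picture concrete, here is a tiny toy model (my own illustration, not a forecast; the “reinvestment factor” r and all the numbers are made up): each step, the system converts some fraction of its current capability into improvement of its own capability, so the per-step growth factor (1 + r) plays the same role as the neutron multiplication factor in fission, with values above 1 cascading and values below 1 fizzling out.

```python
# Toy model of a capability feedback loop (illustration only, not a forecast).
# Each step the system reinvests a fraction r of its capability into making
# itself more capable, so capability follows C_{t+1} = (1 + r) * C_t.

def run_feedback_loop(r, steps, c0=1.0):
    c = c0
    trace = [c]
    for _ in range(steps):
        c += r * c          # improvement is proportional to current capability
        trace.append(c)
    return trace

print(run_feedback_loop(r=-0.1, steps=20)[-1])  # shrinks toward zero: the loop "fizzles"
print(run_feedback_loop(r=+0.1, steps=20)[-1])  # grows geometrically: the loop "cascades"
```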
Eurisko was a super early project sort of along these lines, with “heuristics” built out of “heuristics” by “heuristics”.
The idea of “fast takeoff” imagines that meta learning about meta learning about meta learning might turn out to admit of many large and pragmatically important insights about “how to think real good” that human culture hasn’t serially reached yet, because our brains are slow and our culture accretes knowledge in fits and starts and little disasters each time a very knowledgeable genius dies.
“Fast takeoff” is usually a hypothetical scenario where self-improvement that gets exponentially better turns out to be how the structure of possible thinking works, and something spends lots of serial steps (over months? or over hours in a big datacenter?) seeming to make “not very much progress” because it is maybe (to make an example up) precomputing cache lookups for generic patterns by which Turing machines can be detected to have entered loops or not, and then in the last hour (or the last 10 seconds in a big datacenter) it just… does whatever it is that an optimal thinker would do to make the number go up.
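Here is an equally made-up sketch of that “looks flat for ages, then jumps” shape (the threshold and payoff numbers are hypothetical, chosen only to draw the curve, not taken from anything real): imagine the hidden “meta” work, like the cache-precomputation above, only pays off in visible performance after it crosses some threshold.

```python
# Toy illustration of a progress curve that looks flat and then jumps.
# (Hypothetical numbers; the threshold/payoff structure is invented.)

def visible_progress(steps=100, threshold=90, boost=1.5):
    meta = 0.0    # hidden "meta" work (e.g. precomputed caches), invisible from outside
    score = 1.0   # what an outside observer actually measures
    trace = []
    for _ in range(steps):
        meta += 1.0             # steady serial investment, no visible payoff yet
        if meta >= threshold:
            score *= boost      # once the groundwork pays off, progress compounds
        trace.append(score)
    return trace

curve = visible_progress()
print(curve[50], curve[88], curve[-1])  # ~1.0, ~1.0, then a large jump in the final steps
```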
This is distinct from “short timelines”, because a lot of people have thought for the last 20 years that AGI might be 200 years away, or just impossible for humans, or something. For example, Andrew Ng is a famous idiot (who taught a MOOC on fancy statistics back in the day and then parlayed MOOC-based fame into a fancy title at a big tech company) who quipped that worrying about AI is like worrying about “overpopulation on Mars”. In conversations I had, almost everyone smart already knew that Andrew Ng was being an idiot here on the object level (though maybe he was actually being pretty smart at tricking people into talking about him?), but a lot of “only sorta smart” people thought it would be hubristic to just say that he was wrong, and so took it kind of on faith that he was “object level correct”, and didn’t expect researchers to make actual progress on actual AI.
But progress is happening pretty fast from what I can tell. Whether the relatively-shorter-timelines-than-widely-expected thing converts into a faster-takeoff-than-expected remains to be seen.
I hope not. I think faster takeoffs convergently imply conflicts of interest, and that attempts to “do things before they can be blocked” would mean that something sorta like “ambush tactics” was happening.