Although AI progress is currently gradual enough for regulation to keep up, I do think a hard takeoff is still a possibility.
My understanding is that fast recursive self-improvement begins once there is a closed loop of fully autonomous self-improving AI. AI is not yet capable enough for that, and most of the important aspects of AI research are still done by humans, but it could become possible once AI agents are sufficiently advanced and reliable.
In the period before an intelligence explosion, there could be far more regulation of and awareness about AI than there is today. But if a fast takeoff occurs, regulation would be unable to keep up with AI progress.