As I will reiterate probably for the thousandth time in these discussions, the point at which anyone expected things to start happening quickly and discontinuously is when AGI becomes competent enough to do AI R&D and perform recursive self-improvement. It is true that the smoothness so far has been mildly surprising to me, but it’s really not what most of the historical “slow” vs. “fast” debate has been about, and I don’t really know of anyone who made particularly strong predictions here.
I personally would be open to betting (though, because of doomsday correlations, figuring out the details will probably be hard) that the central predictions in Paul’s “slow vs. fast” takeoff post will indeed not turn out to be true (I am not super confident, but would take a 2:1 bet given a good operationalization):
> I expect “slow takeoff,” which we could operationalize as the economy doubling over some 4 year interval before it doubles over any 1 year interval.
It currently indeed looks like AI will not be particularly transformative before it becomes extremely powerful. Scaling is happening much faster than economic value is being produced by AI, and especially as we get AI-automated R&D, which I expect to happen relatively soon, that trend will get more dramatic.
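To make that operationalization concrete, here is a minimal sketch of how it could be checked against a yearly world-GDP series. This is my own illustration, not anything from Paul’s post; the `gdp` figures are toy numbers and `first_doubling_year` / `is_slow_takeoff` are hypothetical helpers.

```python
# "Slow takeoff" per Paul's operationalization: the economy completes a
# doubling over some 4-year interval before it completes one over any
# 1-year interval. Toy data only; all names here are illustrative.

def first_doubling_year(gdp: dict[int, float], window: int) -> int | None:
    """First year by which GDP has at least doubled within `window` years."""
    for end in sorted(gdp):
        start = end - window
        if start in gdp and gdp[end] >= 2 * gdp[start]:
            return end
    return None

def is_slow_takeoff(gdp: dict[int, float]) -> bool:
    """True if a 4-year doubling completes strictly before any 1-year doubling."""
    four = first_doubling_year(gdp, 4)
    one = first_doubling_year(gdp, 1)
    if four is None:
        return False  # no 4-year doubling observed at all
    return one is None or four < one

# Toy series: ~20%/yr growth through 2035, then ~120%/yr afterwards.
gdp = {y: 100 * 1.2 ** (y - 2030) for y in range(2030, 2036)}
gdp.update({y: gdp[2035] * 2.2 ** (y - 2035) for y in range(2036, 2040)})
print(is_slow_takeoff(gdp))  # True: the 4-year doubling (2034) precedes the 1-year one (2036)
```

On this toy series Paul’s prediction comes out true; the bet above is that the real series will instead jump to fast growth without first completing a 4-year doubling.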
> AI is currently doing AI R&D.
Yeah, I agree that we are seeing a tiny bit of that happening.
Commenting a bit on the exact links you shared: the AlphaChip results seem overstated, from what I’ve heard from other people working in the space; “code being written by AI” is not a great proxy for AI doing AI R&D; and generating synthetic training data is a pretty narrow edge case of AI R&D (though yes, it does matter, and it is a substantial part of why I don’t expect a training-data bottleneck, contrary to what many people have been forecasting).
> I have a hard time imagining there’s a magical threshold where we go from “AI is automating 99.99% of my work” to “AI is automating 100% of my work” and things suddenly go Foom (unless it’s for some other reason like “the AI built a nanobot swarm and turned the planet into computronium”). As it is, I would guess we are closer to “AI is automating 20% of my work” than “AI is automating 1% of my work”.
It’s of course all a matter of degree. The concrete prediction Paul made was “doubling in 4 years before we see a doubling in 1 year”. I would currently be surprised (though not very surprised) if we see the world economy doubling at all before you get much faster growth (probably by taking humans out of the loop completely).
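One way to see why there is no magical threshold short of literal 100%: under a simple Amdahl’s-law-style model (my framing here, not something either of us is committed to), the fraction of the work still done by humans bounds the overall speedup, so the gains grow smoothly, if steeply, as automation approaches completeness.

```python
# Amdahl's-law-style illustration (an assumption of this sketch, not a claim
# from the thread): if a fraction p of the work is automated and the automated
# part runs arbitrarily fast, the human remainder caps the overall speedup.

def max_speedup(p_automated: float) -> float:
    """Upper bound on speedup when the non-automated fraction still runs at 1x."""
    remaining = 1.0 - p_automated
    return float("inf") if remaining == 0 else 1.0 / remaining

for p in (0.01, 0.20, 0.99, 0.9999, 1.0):
    print(f"{p:8.2%} automated -> at most {max_speedup(p):g}x faster")
#    1.00% automated -> at most 1.0101x faster
#   20.00% automated -> at most 1.25x faster
#   99.00% automated -> at most 100x faster
#   99.99% automated -> at most 10000x faster
#  100.00% automated -> at most infx faster
```

On this model the threshold behavior only appears at literally 100%; everywhere below that, the speedup grows continuously, which is one way of cashing out “it’s all a matter of degree.”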