I’m sorry you happened to be the one to trigger this (I don’t think it’s your fault in particular), but, I think it is fucking stupid to have stuck with “slow” takeoff for historical reasons. It has been confusing the whole time. I complained loudly about it on Paul’s Arguments about Fast Takeoff post, which was the first time it even occurred to me to interpret “slow takeoff” to mean “actually, even faster than fast takeoff, just, smoother”. I’m annoyed people kept using it.
If you wanna call it continuous vs discontinuous takeoff, fine, but then actually do that. I think realistically it’s too many syllables for people to reliably use, and I’m annoyed people kept using “slow takeoff” in this way, which I think has been reliably misinterpreted and conflated with longer timelines this whole time. I proposed “Smooth vs Sharp” takeoff when the distinction was first made clear; I think “soft” vs “hard” takeoff is also reasonable.
To be fair, I had intended to write a post complaining about this three years ago and didn’t get around to it at the time, but, I dunno, this still seems like an obvious failure mode people shouldn’t have fallen into.
I think the good news here is that the “slow” vs “fast” debate has been largely won (by slow takeoff), so the only people the terrible naming really affects are us nerds arguing on LessWrong.
As I will reiterate probably for the thousandth time in these discussions, the point where anyone expected things to start happening quickly and discontinuously is when AGI gets competent enough to do AI R&D and perform recursive self-improvement. It is true that the smoothness so far has been mildly surprising to me, but it’s really not what most of the historical “slow” vs. “fast” debate has been about, and I don’t really know anyone who made particularly strong predictions here.
I personally would be open to betting (though because of doomsday correlations figuring out the details will probably be hard) that the central predictions in Paul’s “slow vs. fast” takeoff post will indeed not turn out to be true (I am not like super confident, but would take a 2:1 bet with a good operationalization):
I expect “slow takeoff,” which we could operationalize as the economy doubling over some 4 year interval before it doubles over any 1 year interval.
It currently indeed looks like AI will not be particularly transformative before it becomes extremely powerful. Scaling is happening much faster than economic value is being produced by AI, and especially as we get AI-automated R&D, which I expect to happen relatively soon, that trend will get more dramatic.
As I will reiterate probably for the thousandth time in these discussions, the point where anyone expected things to start happening quickly and discontinuously is when AGI gets competent enough to do AI R&D and perform recursive self-improvement.
AI is currently doing AI R&D.
Yeah, I agree that we are seeing a tiny bit of that happening.
Commenting a bit on the exact links you shared: The AlphaChip stuff seems overstated from what I’ve heard from other people working in the space, “code being written by AI” is not a great proxy for AI doing AI R&D, and generating synthetic training data is a pretty narrow edge case of AI R&D (though yeah, it does matter and is a substantial part of why I don’t expect a training data bottleneck, contrary to what many people have been forecasting).
I have a hard time imagining there’s a magical threshold where we go from “AI is automating 99.99% of my work” to “AI is automating 100% of my work” and things suddenly go Foom (unless it’s for some other reason like “the AI built a nanobot swarm and turned the planet into computronium”). As it is, I would guess we are closer to “AI is automating 20% of my work” than “AI is automating 1% of my work”.
It’s of course all a matter of degree. The concrete prediction Paul made was “doubling in 4 years before we see a doubling in 1 year”. I would currently be surprised (though not very surprised) if we see the world economy doubling at all before you get much faster growth (probably by taking humans out of the loop completely).
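(To spell out the arithmetic behind that operationalization, assuming smooth exponential growth: a doubling over 4 years corresponds to roughly 19% annual growth, far above the roughly 3% per year the world economy has been growing at, while a doubling over 1 year means 100% annual growth.)

$$2^{1/4} - 1 \approx 0.19 \quad (\text{4-year doubling} \approx 19\%/\text{yr}), \qquad 2^{1} - 1 = 1 \quad (\text{1-year doubling} = 100\%/\text{yr})$$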
Why do you think this? Recursive self-improvement isn’t possible yet, so from my perspective it doesn’t seem like we’ve encountered much evidence either way about how fast it might scale.
FWIW I do think we are clearly in a different strategic world than the one most people were imagining in 2010. I agree we still have not hit the point where we’re seeing how sharp the RSI curve will be, but, we are clearly seeing that there will be some kind of significant AI presence in the world by the time RSI hits, and it’d be surprising if that didn’t have some kind of strategic implication.
Huh, this doesn’t seem clear to me. It’s tricky to debate what people used to be imagining, especially on topics where those people were talking past each other this much, but my impression was that the fast/discontinuous argument was that rapid, human-mostly-or-entirely-out-of-the-loop recursive self-improvement seemed plausible—not that earlier, non-self-improving systems wouldn’t be useful.
I agree that nobody was making a specific claim that there wouldn’t be any kind of AI-driven R&D pre-fast-takeoff. But, I think if Eliezer et al. hadn’t been at least implicitly imagining less of this, there would have been at least a bit less talking-past-each-other in the debates with Paul.
I claim the phrasings in your first comment (“significant AI presence”) and your second (“AI driven R&D”) are pretty different; from my perspective, the former doesn’t bear much on this argument, while the latter does. But I think little of the progress so far has resulted from AI-driven R&D?
There is a ton of current AI research that would be impossible without existing AI (mostly generating synthetic data to train models). It seems likely that almost all aspects of AI research (chip design, model design, data curation) will follow this trend.
Are there any specific areas in which you would predict “when AGI is achieved, the best results on topic X will have little-to-no influence from AI”?
Well the point of saying “significant AI presence” was “it will have mattered”. I think that includes AI driven R&D. (It also includes things like “are the first AIs plugged into systems they get a lot of opportunity to manipulate from an early stage” and “the first AI is in a more multipolar-ish scenario and doesn’t get decisive strategic advantage.”)
I agree we haven’t seen much AI-driven R&D yet (although I think there have been at least slight coding speedups from pre-o1 Copilot, like 5% or 10%, and I think o1 is on track to be fairly significant, and I expect to start seeing more meaningful AI-driven R&D within a year or so).
[edit: Logan’s argument about synthetic data was compelling to me at least at first glance, although I don’t know a ton about it and can imagine learning more and changing my mind again]
I think this is going to keep having terrible consequences: people (such as the public and, more importantly, policymakers) being confused, taking longer to grasp what is going on, and possibly engaging with the entire situation via wrong, misleading frames.
I think you are dramatically overestimating the general public’s interest in how things are named on random LessWrong blog posts.
Probably true for most of the public, but I bet pretty strongly against “there are zero important/influential people who read LessWrong, or who read papers or essays that mirror LessWrong terminology, who will end up being confused about this in a way that matters.”
Meanwhile, I see almost zero reason to keep using fast/slow – I think it will continue to confuse newcomers to the debate on LessWrong, and I think if you swap in either smooth/sharp or soft/hard there will be approximately no confusion. I agree there is some annoyance/friction in remembering to use the new terms, but I don’t think this adds up to much – I think the amount of downstream confusion and wasted time from any given LW post or paper using the old terms already outweighs the cost to the author.
(fyi I’m going to try to write some substantive thoughts on your actual point, and suggest moving this discussion to this post if you are moved to argue more about it)