Why do you think this? Recursive self-improvement isn’t possible yet, so from my perspective it doesn’t seem like we’ve encountered much evidence either way about how fast it might scale.
FWIW, I do think we are clearly in a different strategic world than the one I think most people were imagining in 2010. I agree we still have not hit the point where we're seeing how sharp the RSI curve will be, but we are clearly seeing that there will be some kind of significant AI presence in the world by the time RSI hits, and it'd be surprising if that didn't have some strategic implication.
Huh, this doesn’t seem clear to me. It’s tricky to debate what people used to be imagining, especially on topics where those people were talking past each other this much, but my impression was that the fast/discontinuous argument was that rapid, human-mostly-or-entirely-out-of-the-loop recursive self-improvement seemed plausible—not that earlier, non-self-improving systems wouldn’t be useful.
I agree that nobody was making a specific claim that there wouldn't be any kind of AI-driven R&D pre-fast-takeoff. But I think Eliezer et al. were at least implicitly imagining less of this; if they hadn't been, there would have been a bit less talking-past-each-other in the debates with Paul.
I claim the phrasings in your first comment ("significant AI presence") and your second ("AI-driven R&D") are pretty different: from my perspective, the former doesn't bear much on this argument, while the latter does. But I think little of the progress so far has resulted from AI-driven R&D?
There is a ton of current AI research that would be impossible without existing AI (mostly generating synthetic data to train models). It seems likely that almost all aspects of AI research (chip design, model design, data curation) will follow this trend.
Are there any specific areas in which you would predict “when AGI is achieved, the best results on topic X will have little-to-no influence from AI”?
Well, the point of saying "significant AI presence" was "it will have mattered." I think that includes AI-driven R&D. (It also includes things like "the first AIs are plugged into systems they get a lot of opportunity to manipulate from an early stage" and "the first AI is in a more multipolar-ish scenario and doesn't get a decisive strategic advantage.")
I agree we haven't seen much AI-driven R&D yet (although I think there have been at least slight coding speedups from pre-o1 Copilot, like 5% or 10%, and I think o1 is on track to be fairly significant, and I expect to start seeing more meaningful AI-driven R&D within a year or so).
[edit: Logan's argument about synthetic data was compelling to me, at least at first glance, although I don't know a ton about it and can imagine learning more and changing my mind again]