Indeed the leadership of all the major AI corporations are actively excited about, and gunning for, an intelligence explosion. They are integrating AI into their AI R&D as fast as they can.
It’s hard to know for sure what they are planning in secret. If I were them, I’d currently be in a mode of “biding my time, building up the prerequisites, and waiting for the optimal moment to focus on automating AI R&D.”
I think the current LLMs and other AI systems are not quite strong enough to pass the critical threshold where this recursive self-improvement (RSI) feedback loop could really take off. Thus, if I had the choice between investing in preparing the scaffolding now and racing to the first good-enough version so that I’d be the first to start doing RSI… I’d just push hard for that first good-enough version, then pivot hard to RSI as soon as I had it.
I don’t know; intuitively it would seem suboptimal to put very little of the research portfolio into preparing the scaffolding, since somebody else who isn’t far behind on the base model (e.g. another lab, or maybe even the open-source community) might figure out the scaffolding (and perhaps not make anything public) and get ahead overall.
Can you expand on this? My rough impression (without any inside knowledge) is that automated AI R&D is probably very much under-elicited, including e.g. in this recent OpenAI automated-ML evals paper, which might suggest they’re not gunning for it as hard as they could be?
Maybe. I think it’s hard to say from an outside perspective; I expect that what’s being done inside labs is not always obvious from the outside.
And isn’t o1/strawberry something pointing in the direction of RSI, implying that thought and effort are being put in that direction?