Recursive self-improvement in AI probably comes before AGI. Evolution doesn’t need to understand human minds to build them, and a parent doesn’t need to be an AI researcher to make a child. The bitter lesson and the practice of recent years suggest that building increasingly capable AIs doesn’t depend on understanding how they think.
Thus the least capable AI that can build superintelligence without human input only needs to be a competent engineer that can scale and refine a sufficiently efficient AI design, in an empirically driven, mundane way that doesn’t depend on matching the capabilities of a Grothendieck for conceptual invention. This makes the threshold of AGI less relevant for timelines of recursive self-improvement than I previously expected. With o1 and what straightforwardly follows, we plausibly already have all it takes to get recursive self-improvement, if the current designs get there within the next few years of scaling, even if the resulting AIs are merely competent engineers that fail to match humans at less legible technical skills.
The bitter lesson says that there are many things you don’t need to understand, but it doesn’t say you don’t need to understand anything.
I think you’re doing a “we just need X” with recursive self-improvement. The improvement may be iterable and self-applicable… but is it general? Is it on a bounded trajectory or an unbounded trajectory? Very different outcomes.
Technically this probably isn’t recursive self-improvement, but rather automated AI progress. This is relevant mostly because:
1. It implies that, at least through the early parts of the takeoff, there will be a lot of individual AI agents doing locally-useful compute-efficiency and improvement-on-relevant-benchmarks things, rather than one single coherent agent following a global plan for configuring the matter in the universe in a way that maximizes some particular internally-represented utility function.
2. It means that multi-agent dynamics will be very relevant in how things happen.
If your threat model is “no group of humans manages to gain control of the future before human irrelevance”, none of this probably matters.
No group of AIs needs to gain control before human irrelevance either. Like a runaway algal bloom, AIs might be able to bootstrap superintelligence, without crossing the threshold of AGI being useful in helping them gain control over this process any more than humans maintain such control at the outset. So it’s not even multi-agent dynamics shaping the outcome; capitalism might just serve as the nutrients until a much higher threshold of capability, where a superintelligence can finally take control of this process.
Cutting edge AI research is one of the most difficult tasks humans are currently working on, so the intelligence requirement to replace human researchers is quite high. It is likely that most ordinary software development, being easier, will be automated before AI research is automated. I’m unsure whether LLMs with long chains of thought (o1-like models) can reach this level of intelligence before human researchers invent a more general AI architecture.
Humans are capable of solving conceptually difficult problems, so they do. An easier path might be possible that doesn’t depend on such capabilities, and doesn’t stall for their lack, like evolution doesn’t stall for lack of any mind at all. If there is more potential for making models smarter alien tigers by scaling RL in o1-like post-training, and the scaling proceeds to 1 gigawatt and then 35 gigawatt training systems, it might well be sufficient to get an engineer AI that can improve such systems further, at 400x and then 10,000x the compute of GPT-4.
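As a rough illustration of what those power figures could mean in training compute, here is a back-of-envelope Python sketch. Every constant in it (GPT-4’s training compute, per-accelerator throughput and power, utilization, run length) is an illustrative assumption rather than a figure from the text, so the resulting multipliers are only order-of-magnitude.

```python
# Back-of-envelope: training compute of a power-constrained cluster vs GPT-4.
# All constants are illustrative assumptions, not figures from the post.

GPT4_FLOPS = 2e25               # commonly cited rough estimate for GPT-4's training compute
FLOPS_PER_GPU = 5e15            # assumed low-precision throughput of a next-generation accelerator
WATTS_PER_GPU = 1500            # assumed all-in power per accelerator (chip + cooling + networking)
UTILIZATION = 0.35              # assumed compute utilization
TRAIN_SECONDS = 80 * 24 * 3600  # assumed ~80-day training run

def compute_multiple(power_watts: float) -> float:
    """Training compute of a cluster drawing power_watts, as a multiple of GPT-4."""
    n_gpus = power_watts / WATTS_PER_GPU
    total_flops = n_gpus * FLOPS_PER_GPU * UTILIZATION * TRAIN_SECONDS
    return total_flops / GPT4_FLOPS

for gigawatts in (1, 35):
    print(f"{gigawatts} GW system: ~{compute_multiple(gigawatts * 1e9):,.0f}x GPT-4")
```

With these particular assumptions the multipliers come out around 400x and 14,000x, the same order of magnitude as the figures above; different assumptions about chips, utilization, or run length shift them by a factor of a few.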
Before o1, there was a significant gap, the mysterious absence of System 2 capabilities, with only a vague expectation that they might emerge or become easier to elicit from scaled-up base models. This uncertainty no longer gates the engineering capabilities of AIs. I’m still unsure that scaling directly can make AIs capable of novel conceptual thought, but AIs becoming able to experimentally iterate on AI designs seems likely, and that in turn seems sufficient to eventually mutate these designs towards the remaining missing capabilities.
(It’s useful to frame most ideas as exploratory engineering rather than forecasting. The question of whether something can happen, or can be done, doesn’t need to be contextualized within the question of whether it will happen or will be done. Physical experiments are done under highly contrived conditions, and similarly we can conduct thought experiments or conceptual arguments under fantastical or even physically impossible conditions. Thus I think Carl Shulman’s human-level AGI world is a valid exploration of the future of AI, even though I don’t believe that most of what he describes happens in actuality before superintelligence changes the premise. It serves as a strong argument for industrial and economic growth driven by AGI, even though it almost entirely consists of describing events that can’t possibly happen.)
Cutting edge AI research seems remarkably and surprisingly easy compared to other forms of cutting edge science. Most things work on the first try, clever insights aren’t required, it’s mostly an engineering task of scaling compute.
This seems like the sort of R&D that China is good at: research that doesn’t need superstar researchers and that is mostly made of incremental improvements. Yet they don’t seem to be producing top LLMs. Why is that?
China is producing research in a number of areas right now that surpasses the West and is arguably more scientifically impressive than producing top LLMs.
A big reason China is lagging a little might be political interference at major tech companies; Xi Jinping instigated a major crackdown recently. There is also significantly less Chinese text data. I am not a China or tech expert, so these are just guesses.
In any case, I wouldn’t assign it too much significance. The AI space is moving so quickly that even a one-year delay can seem like light years. But that doesn’t mean that Chinese companies can’t do it, or that a country-continent with 1.4 billion people and a history of many technological firsts can’t scale up a transformer.
@gwern
Yi-Lightning (01 AI) Chatbot Arena results are surprisingly strong for its price, which puts it at about 10B active parameters[1]. It’s above Claude 3.5 Sonnet and GPT-4o in Math, above Gemini 1.5 Pro 002 in English and Hard Prompts (English). It’s above all non-frontier models in Coding and Hard Prompts (both with Style Control), including Qwen-2.5-72B (trained on 18T tokens). It would be interesting to know whether this is mostly better methodology or compute scaling being taken more seriously for a tiny model.
The developer’s site says it’s a MoE model. Developer’s API docs list it at ¥0.99/1M tokens. The currency must be Renminbi, so that’s about $0.14. Together serves Llama-3-8B for $0.10-0.18 (per million tokens), Qwen-2.5-7B for $0.30, all MoE models up to 56B total (not active) parameters for $0.60. (The prices for open weights models won’t have significant margins, and model size is known, unlike with lightweight closed models.)
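A tiny Python sketch of that conversion and comparison; the exchange rate is an assumption (roughly 7.1 RMB per USD at the time), and the reference prices are the ones quoted above.

```python
# Convert the listed Yi-Lightning price to USD and compare with the
# open-weights serving prices quoted above (all per 1M tokens).
RMB_PER_USD = 7.1  # assumed exchange rate, roughly late 2024

yi_lightning_usd = 0.99 / RMB_PER_USD
print(f"Yi-Lightning: ~${yi_lightning_usd:.2f} per 1M tokens")  # ~$0.14

# Together's quoted prices per 1M tokens, for comparison:
reference = {
    "Llama-3-8B": "$0.10-0.18",
    "Qwen-2.5-7B": "$0.30",
    "MoE up to 56B total params": "$0.60",
}
for model, price in reference.items():
    print(f"{model}: {price}")
```

At ~$0.14 per million tokens it sits in the same range as the ~8B-scale open weights models, which is what makes the ~10B active parameters guess plausible.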
Kai-Fu Lee, CEO of 01 AI, posted on LinkedIn:

Yi-Lightning is a small MOE model that is extremely fast and inexpensive. Yi-Lightning costs only $0.14 (RMB 0.99) / mil tokens [...] Yi-Lightning was pre-trained on 2000 H100s for 1 month, costing about $3 million, a tiny fraction of Grok-2.
Assuming it’s trained in BF16 with 40% compute utilization, that’s a 2e24 FLOPs model (Llama-3-70B is about 6e24 FLOPs, but it’s not MoE, so the FLOPs are not used as well). Assuming from per token price that it has 10-20B active parameters, it’s trained on 15-30T tokens. So not an exercise in extreme compute scaling, just excellent execution.
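Spelling out that arithmetic as a minimal Python sketch, assuming the cluster figures from the quoted post (2000 H100s for one month), H100 dense BF16 throughput of roughly 1e15 FLOP/s, 40% utilization, and the standard C ≈ 6·N·D rule of thumb:

```python
# Rough estimate of Yi-Lightning's training compute and training tokens.
H100_BF16_FLOPS = 1e15        # ~dense BF16 throughput per H100 (about 989 TFLOP/s)
N_GPUS = 2000
UTILIZATION = 0.40            # assumed compute utilization
SECONDS = 30 * 24 * 3600      # one month

total_flops = N_GPUS * H100_BF16_FLOPS * UTILIZATION * SECONDS
print(f"training compute ~ {total_flops:.1e} FLOPs")  # ~2e24

# Chinchilla-style approximation: C ~= 6 * N * D, so D ~= C / (6 * N).
for active_params in (10e9, 20e9):
    tokens = total_flops / (6 * active_params)
    print(f"{active_params / 1e9:.0f}B active params -> ~{tokens / 1e12:.0f}T tokens")
```

With these inputs the token count comes out around 17-35T for 10-20B active parameters, the same ballpark as the 15-30T range above; the exact numbers move with the assumed throughput and utilization.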