Well I’m one of the people who says that “AGI” is the scary thing that doesn’t exist yet (e.g. FAQ or “why I want to move the goalposts on ‘AGI’”). I don’t think “AGI” is a perfect term for the scary thing that doesn’t exist yet, but my current take is that “AGI” is less bad than the alternatives. (I was listing out some other options here.) In particular, I don’t think there’s any terminological option that is sufficiently widely-understood and unambiguous that I wouldn’t need to include a footnote or link explaining exactly what I mean. And if I’m going to do that anyway, doing that with “AGI” seems OK. But I’m open-minded to discussing other options if you (or anyone) have any.
Generative pre-training is AGI technology: it creates a model with mediocre competence at basically everything.
I disagree with that—as in “why I want to move the goalposts on ‘AGI’”, I think there’s an especially important category of capability that entails spending a whole lot of time working with a system / idea / domain, and getting to know it and understand it and manipulate it better and better over the course of time. Mathematicians do this with abstruse mathematical objects, but also trainee accountants do this with spreadsheets, and trainee car mechanics do this with car engines and pliers, and kids do this with toys, and gymnasts do this with their own bodies, etc. I propose that LLMs cannot do things in this category at human level, as of today—e.g. AutoGPT basically doesn’t work, last I heard. And this category of capability isn’t just a random cherrypicked task, but rather central to human capabilities, I claim. (See Section 3.1 here.)
Thanks for your perspective! I think explicitly moving the goal-posts is a reasonable thing to do here, although I would prefer to do this in a way that doesn’t harm the meaning of existing terms.
I mean: I think a lot of people did have some kind of internal “human-level AGI” goalpost which they imagined in a specific way, and modern AI development has resulted in a thing which fits part of that image while not fitting other parts, and it makes a lot of sense to reassess things. Goalpost-moving is usually maligned as an error, but sometimes it actually makes sense.
I prefer ‘transformative AI’ for the scary thing that isn’t here yet. I see where you’re coming from with respect to not wanting to have to explain a new term, but I think ‘AGI’ is probably still more obscure for a general audience than you think it is (see, e.g., the snarky complaint here). Of course it depends on your target audience. But ‘transformative AI’ seems relatively self-explanatory as these things go. I see that you have even used that term at times.
I do think this is gesturing at something important. This feels very similar to the sort of pushback I’ve gotten from other people. Something like: “the fact that AIs can perform well on most easily-measured tasks doesn’t tell us that AIs are on the same level as humans; it tells us that easily-measured tasks are less informative about intelligence than we thought”.
Currently I think LLMs have a small amount of this thing, rather than zero. But my picture of it remains fuzzy.
I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more.
In this case, it is clear to me that there are important senses of the term “general” which modern AI satisfies the criteria for. You made that point persuasively in this post. However, it is also clear that there are important senses of the term “general” which modern AI does not satisfy the criteria for. Steven Byrnes made that point persuasively in his response. So far as I can tell you will agree with this.
If we all agree with the above, the most important thing is to disambiguate the sense of the term being invoked when applying it in reasoning about AI. Then, we can figure out whether the source of our disagreements is about semantics (which label we prefer for a shared concept) or substance (which concept is actually appropriate for supporting the inferences we are making).
What are good discourse norms for disambiguation? An intuitively appealing option is to coin new terms for variants of umbrella concepts. This may work in academic settings, but the familiar terms are always going to have a kind of magnetic pull in informal discourse. As such, I think communities like this one should rather strive to define terms wherever possible and approach discussions with a pluralistic stance.
My complaint about “transformative AI” is that (IIUC) its original and universal definition is not about what the algorithm can do but rather how it impacts the world, which is a different topic. For example, the very same algorithm might be TAI if it costs $1/hour but not TAI if it costs $1B/hour, or TAI if it runs at a certain speed but not TAI if it runs many OOM slower, or “not TAI because it’s illegal”. Also, two people can agree about what an algorithm can do but disagree about what its consequences would be on the world, e.g. here’s a blog post claiming that if we have cheap AIs that can do literally everything that a human can do, the result would be “a pluralistic and competitive economy that’s not too different from the one we have now”, which I view as patently absurd.
Anyway, “how an AI algorithm impacts the world” is obviously an important thing to talk about, but “what an AI algorithm can do” is also an important topic, and different, and that’s what I’m asking about, and “TAI” doesn’t seem to fit it as terminology.
Yep, I agree that Transformative AI is about impact on the world rather than capabilities of the system. I think that is the right thing to talk about for things like “AI timelines” if the discussion is mainly about the future of humanity. But, yeah, definitely not always what you want to talk about.
I am having difficulty coming up with a term which points at what you want to point at, so yeah, I see the problem.
I agree with Steve Byrnes here. I think I have a better way to describe this.
I would say that the missing piece is ‘mastery’. Specifically, learning mastery over a piece of reality. By mastery I am referring to the skillful ability to model, predict, and purposefully manipulate that subset of reality.
I don’t think this is an algorithmic limitation, exactly.
Look at the work DeepMind has been doing, particularly with Gato and more recently AutoRT, SARA-RT, RT-Trajectory, UniSim, and Q-Transformer. Look at the work being done with the help of Nvidia’s new robot-simulation gym environment. Look at OpenAI’s recent foray into robotics with Figure AI. This work is held back from being highly impactful (so far) by the difficulty of accurately simulating novel interesting things, the difficulty of learning the pairing of action → consequence compared to learning a static pattern of data, and the hardware difficulties of robotics.
This is what I think our current multimodal frontier models are mostly lacking. They can regurgitate, and to a lesser extent synthesize, facts that humans wrote about, but not develop novel mastery of subjects and then report back on their findings. This is the difference between being able to write a good scientific paper given a dataset of experimental results and a rough description of the experiment, versus being able to gather that data yourself. The line here is blurry, and will probably get blurrier before collapsing entirely. It’s about not just doing the experiment, but doing the pilot studies and observations and playing around with the parameters to build a crude initial model about how this particular piece of the universe might work. Building your own new models rather than absorbing models built by others. Moving beyond student to scientist.
This is in large part a limitation of training expense. It’s difficult to have enough on-topic information available in parallel to feed today’s data-inefficient algorithms many lifetimes’ worth of experience.
So, while it is possible to improve the skill of mastery-of-reality by scaling up current models and training systems, it gets much, much easier if the algorithms become more compute-efficient and sample-efficient to train.
That is what I think is coming.
I’ve done my own in-depth research into the state of machine learning, into potential algorithmic advances which have not yet been incorporated into frontier models, and into the state of neuroscience’s understanding of the brain. I have written a report detailing the ways in which I think Joe Carlsmith’s and Ajeya Cotra’s estimates overestimate the AGI-relevant compute of the human brain by somewhere between 10x and 100x.
Furthermore, I think there are compelling arguments for why the compute in frontier algorithms is not being deployed as efficiently as it could be, resulting in higher training costs and data requirements than are theoretically necessary.
In combination, these findings lead me to believe we are primarily algorithm-constrained, not hardware- or data-constrained. This, in turn, means that once frontier models have progressed to the point of being able to automate research into improved algorithms, I expect substantial progress to follow. That progress will, if I am correct, be untethered from further increases in compute hardware or training data.
My best guess is that a frontier model of the approximate expected capability of GPT-5 or GPT-6 (equivalently Claude 4 or 5, or similar advances in Gemini) will be sufficient for the automation of algorithmic exploration to an extent that the necessary algorithmic breakthroughs will be made. I don’t expect the search process to take more than a year. So I think we should expect a time of algorithmic discovery in the next 2-3 years which leads to a strong increase in AGI capabilities even holding compute and data constant.
I expect that ‘mastery of novel pieces of reality’ will continue to lag behind the ability to regurgitate and recombine recorded knowledge. Indeed, recombining information already seems to lag behind regurgitation and creative extrapolation; not as far behind as mastery, so somewhere in the middle.
If you imagine the whole skillset remaining in its relative configuration of peaks and valleys, but shifted upwards such that the currently lagging ‘mastery’ skill is at human level and a lot of other skills are well beyond, then you will be picturing something similar to what I am picturing.
[Edit: This is what I mean when I say it isn’t a limit of the algorithm per se. Change the framing of the data, and you change the distribution of the outputs.]
From what I understand, I would describe the skill Steven points to as “autonomously and persistently learning at deploy time”.
How would you feel about calling systems that possess this ability “self-refining intelligences”?
I think mastery, as Nathan comments above, is a potential outcome of employing this ability rather than the skill/ability itself.
I propose that LLMs cannot do things in this category at human level, as of today—e.g. AutoGPT basically doesn’t work, last I heard. And this category of capability isn’t just a random cherrypicked task, but rather central to human capabilities, I claim.

What would you claim is a central example of a task which requires this type of learning? ARA type tasks? Agency tasks? Novel ML research? Do you think these tasks certainly require something qualitatively different than a scaled up version of what we have now (pretraining, in-context learning, RL, maybe training on synthetic domain specific datasets)? If so, why? (Feel free to not answer this or just link me what you’ve written on the topic. I’m more just reacting than making a bid for you to answer these questions here.)
Separately, I think it’s non-obvious that you can’t make human-competitive sample-efficient learning happen in many domains where LLMs are already competitive with humans in other non-learning ways, by spending massive amounts of compute doing training (with SGD) and synthetic data generation. (See, e.g., EfficientZero.) It’s just that the amount of compute/spend required is such that you’re effectively doing a bunch more pretraining, and thus it’s not really an interestingly different concept. (See also the discussion here, which is mildly relevant.)
In domains where LLMs are much worse than typical humans in non-learning ways, it’s harder to do the comparison, but it’s still non-obvious that the learning speed is worse given massive computational resources and some investment.
I’m talking about the AI’s ability to learn / figure out a new system / idea / domain on the fly. It’s hard to point to a particular “task” that specifically tests this ability (in the way that people normally use the term “task”), because for any possible task, maybe the AI happens to already know how to do it.
You could filter the training data, but doing that in practice might be kinda tricky because “the AI already knows how to do X” is distinct from “the AI has already seen examples of X in the training data”. LLMs “already know how to do” lots of things that are not superficially in the training data, just as humans “already know how to do” lots of things that are superficially unlike anything they’ve seen before—e.g. I can ask a random human to imagine a purple colander falling out of an airplane and answer simple questions about it, and they’ll do it skillfully and instantaneously. That’s the inference algorithm, not the learning algorithm.
Well, getting an AI to invent a new scientific field would work as such a task, because it’s not in the training data by definition. But that’s such a high bar as to be unhelpful in practice. Maybe tasks that we think of as more suited to RL, like low-level robot control, or skillfully playing games that aren’t like anything in the training data?
Separately, I think there are lots of domains where “just generate synthetic data” is not a thing you can do. If an AI doesn’t fully ‘understand’ the physics concept of “superradiance” based on all existing human writing, how would it generate synthetic data to get better? If an AI is making errors in its analysis of the tax code, how would it generate synthetic data to get better? (If you or anyone has a good answer to those questions, maybe you shouldn’t publish them!! :-P )
I think “doesn’t fully understand the concept of superradiance” is a phrase that smuggles in too many assumptions here. If you rephrase it as “can determine when superradiance will occur, but makes inaccurate predictions about what physical systems will do in those situations” / “makes imprecise predictions in such cases” / “has trouble distinguishing cases where superradiance will occur vs. cases where it will not”, all of those suggest pretty obvious ways of generating training data.
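To make that concrete, here is a minimal sketch of what that could look like, assuming you have a trusted simulator or experimental pipeline to score against (sample_scenario, model_predict, and run_physics_sim are hypothetical placeholders, not anything proposed in this thread):

```python
import random

def sample_scenario():
    # Hypothetical: randomly parameterize a setup where the phenomenon
    # (e.g. superradiance) may or may not occur.
    return {"num_emitters": random.randint(1, 50),
            "spacing_nm": random.uniform(10.0, 500.0)}

def build_finetuning_set(model_predict, run_physics_sim, n_scenarios=1000, tol=0.05):
    """Collect (scenario, ground truth) pairs wherever the model's prediction is off."""
    dataset = []
    for _ in range(n_scenarios):
        scenario = sample_scenario()
        predicted = model_predict(scenario)   # the model's guess at the measurable outcome
        actual = run_physics_sim(scenario)    # ground truth from a simulator or experiment
        if abs(predicted - actual) > tol:     # keep only cases the model currently gets wrong
            dataset.append((scenario, actual))
    return dataset
```

Whether a trusted source of ground truth exists for a given domain is, of course, exactly where the disagreement above lies.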
GPT-4 can already “figure out a new system on the fly” in the sense of taking some repeatable phenomenon it can observe, and predicting things about that phenomenon, because it can write standard machine learning pipelines, design APIs with documentation, and interact with documented APIs. However, the process of doing that is very slow and expensive, and resembles “build a tool and then use the tool” rather than “augment its own native intelligence”.
Which makes sense. The story of human capabilities advances doesn’t look like “find clever ways to configure unprocessed rocks and branches from the environment in ways which accomplish our goals”, it looks like “build a bunch of tools, and figure out which ones are most useful and how they are best used, and then use our best tools to build better tools, and so on, and then use the much-improved tools to do the things we want”.
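As a toy illustration of that “build a tool, then use the tool” pattern, here is a hypothetical sketch of the kind of pipeline such a system might write for itself (observe_phenomenon stands in for whatever API or instrument it can poll; nothing here is from the thread):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def build_predictor(observe_phenomenon, n_samples=500):
    """Fit a standard ML model to logged observations of a repeatable phenomenon."""
    X, y = [], []
    for _ in range(n_samples):
        inputs, outcome = observe_phenomenon()   # one observed (features, outcome) pair
        X.append(inputs)
        y.append(outcome)
    return RandomForestRegressor().fit(np.array(X), np.array(y))

# The fitted model is the "tool": it can be queried cheaply later via .predict(),
# but nothing about the system's own weights has changed in the process.
```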
I don’t know how I feel about pushing this conversation further. A lot of people read this forum now.
I feel quite confident that all the leading AI labs are already thinking and talking internally about this stuff, and that what we are saying here adds approximately nothing to their conversations. So I don’t think it matters whether we discuss this or not. That simply isn’t a lever of control we have over the world.
There are potentially secret things people might know which shouldn’t be divulged, but I doubt this conversation is anywhere near technical enough to be advancing the frontier in any way.
Perhaps.
I think Steven’s response hits the mark, but from my own perspective, I would say that a not-totally-irrelevant way to measure something related would be: many-shot learning, particularly in cases where few-shot learning does not do the trick.
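One rough way to operationalize that measurement, as a sketch only (query_model, the example pairs, and the test items are all assumed placeholders, not an established benchmark):

```python
def accuracy_vs_shots(query_model, examples, test_items, shot_counts=(0, 4, 32, 256)):
    """Track accuracy as the number of in-context examples grows. Tasks where
    accuracy is flat at low shot counts but climbs at high ones are candidates
    for many-shot working where few-shot does not."""
    results = {}
    for k in shot_counts:
        prefix = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples[:k])
        correct = sum(
            query_model(f"{prefix}\nQ: {question}\nA:").strip() == answer
            for question, answer in test_items
        )
        results[k] = correct / len(test_items)
    return results
```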
Yes, this is almost exactly it. I don’t expect frontier LLMs to carry out a complicated, multi-step process and recover from obstacles.
I think of this as the “squirrel bird feeder test”. Squirrels are ingenious and persistent problem solvers, capable of overcoming chains of complex obstacles. LLMs really can’t do this (though Devin is getting closer, if demos are to be believed).
Here’s a simple test: Ask an AI to open and manage a local pizza restaurant, buying kitchen equipment, dealing with contractors, selecting recipes, hiring human employees to serve or clean, registering the business, handling inspections, paying taxes, etc. None of these are expert-level skills. But frontier models are missing several key abilities. So I do not consider them AGI.
However, I agree that LLMs already have superhuman language skills in many areas. They have many, many parts of what’s needed to complete challenges like the above. (On principle, I won’t try to list what I think they’re missing.)
I fear the period between “actual AGI” and “weak ASI” will be extremely short. And I don’t actually believe there is any long-term way to control ASI.
I fear that most futures lead to a partially-aligned super-human intelligence with its own goals. And any actual control we have will be transitory.
Here’s a simple test: Ask an AI to open and manage a local pizza restaurant, buying kitchen equipment, dealing with contractors, selecting recipes, hiring human employees to serve or clean, registering the business, handling inspections, paying taxes, etc.

I agree that these are things current AI systems don’t/can’t do, and that they aren’t considered expert-level skills for humans. I disagree that this is a simple test, or the kind of thing a typical human can do without lots of feedback, failures, or assistance. Many very smart humans fail at some or all of these tasks. They give up on starting a business, mess up their taxes, have a hard time navigating bureaucratic red tape, and don’t ever learn to cook. I agree that if an AI could do these things it would be much harder to argue against it being AGI, but it’s important to remember that many healthy, intelligent, adult humans can’t, at least not reliably. Also, remember that most restaurants fail within a couple of years even after making it through all these hoops. The failure rate is very high even for experienced restaurateurs doing the managing.
I suppose you could argue for a definition of general intelligence that excludes a substantial fraction of humans, but for many reasons I wouldn’t recommend it.
Yeah, the precise ability I’m trying to point to here is tricky. Almost any human (barring certain forms of senility, severe disability, etc) can do some version of what I’m talking about. But as in the restaurant example, not every human could succeed at every possible example.
I was trying to better describe the abilities that I thought GPT-4 was lacking, using very simple examples. And it started looking way too much like a benchmark suite that people could target.
Suffice to say, I don’t think GPT-4 is an AGI. But I strongly suspect we’re only a couple of breakthroughs away. And if anyone builds an AGI, I am not optimistic we will remain in control of our futures.
Got it, makes sense, agreed.
One way in which “spending a whole lot of time working with a system / idea / domain, and getting to know it and understand it and manipulate it better and better over the course of time” could be solved automatically is just by having a truly huge context window. Example of an experiment: teach a particular branch of math to an LLM that has never seen that branch of math.
Maybe humans have just the equivalent of a sort of huge context window spanning selected stuff from their entire lifetimes, and so this kind of learning is possible for them.
I don’t think it is sensible to model humans as “just the equivalent of a sort of huge context window”, because this is not a particularly good computational model of how human learning and memory work; but I do think that the technology behind the increasing context size of modern AIs contributes to them having a small but nonzero amount of the thing Steven is pointing at, due to the spontaneous emergence of learning algorithms.
You also have a simple algorithmic problem. Humans learn by replacing bad policy with good: a baby replaces “policy that drops objects picked up” -> “policy that usually results in object retention”.
This is because, at a mechanistic level, the baby tries many times to pick up and retain objects, and within a fixed amount of circuitry in their brain, connections that resulted in a drop get down-weighted and ones that resulted in retention get reinforced.
This means that over time, as the baby learns, the compute cost for motor manipulation remains constant: technically O(1), though that’s a bit of a confusing way to express it.
With in-context-window learning, you can imagine an LLM+robot recording:
Robotic token string: <string of robotic policy tokens 1> : outcome, drop
Robotic token string: <string of robotic policy tokens 2> : outcome, retain
Robotic token string: <string of robotic policy tokens 2> : outcome, drop
And so on, extending and consuming all of the machine’s context window, and every time the machine decides which tokens to use next, it needs O(n log n) compute to consider all the tokens in the window. (It used to be n^2; this is a huge advance.)
This does not scale. You will not get capable or dangerous AI this way. Obviously you need to compress that linear list of outcomes from different strategies into updates to the underlying network that generated them, so that it is more likely to output tokens that result in success.
The same goes for any other task you want the model to do. In-context learning scales poorly. This also makes it safe...
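As a toy calculation of the scaling argument above, taking its cost assumptions at face value (roughly n log n attention compute per decision when experience just accumulates in context, versus a roughly constant per-decision cost once it has been compressed into the weights; all specific numbers below are made up for illustration):

```python
import math

TOKENS_PER_TRIAL = 200   # hypothetical length of one recorded policy rollout
CONSTANT_COST = 200.0    # hypothetical fixed per-decision cost after weight updates

def cumulative_cost(num_trials):
    in_context, weight_update = 0.0, 0.0
    for trial in range(1, num_trials + 1):   # simplified: one decision per trial
        n = trial * TOKENS_PER_TRIAL         # context length accumulated so far
        in_context += n * math.log2(n)       # per-decision cost grows with context
        weight_update += CONSTANT_COST       # per-decision cost stays flat
    return in_context, weight_update

for trials in (10, 100, 1000):
    ic, wu = cumulative_cost(trials)
    print(f"{trials:>5} trials: in-context ~{ic:.2e} vs weight-update ~{wu:.2e}")
```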
Yes. This seems so obviously true to me, in a way that makes it profoundly mysterious that almost everybody else seems to disagree. Then again, probably it’s for the best. Maybe this is the one weird timeline where we gmi because everybody thinks we already have AGI.