Thanks for this thoughtful, detailed, object-level critique! Just the sort of discussion I hope to inspire. Strong-upvoted.
Here are my point-by-point replies:
Of course there are workarounds for each of these issues, such as RAG for long-term memory, and multi-prompt approaches (chain-of-thought, tree-of-thought, AutoGPT, etc.) for exploratory work processes. But I see no reason to believe that they will work sufficiently well to tackle a week-long project. Briefly, my intuitive argument is that these are old school, rigid, GOFAI, Software 1.0 sorts of approaches, the sort of thing that tends to not work out very well in messy real-world situations. Many people have observed that even in the era of GPT-4, there is a conspicuous lack of LLMs accomplishing any really meaty creative work; I think these missing capabilities lie at the heart of the problem.
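(To make the quoted claim concrete, the kind of "Software 1.0" scaffold at issue can be sketched in a few lines. Everything below is a hypothetical illustration, not any particular product's code: `call_llm` stands in for a real chat-completion API, and a real system would use embeddings and a vector index rather than word overlap. The point is that the long-term memory and the work process live in rigid, hand-written glue code, not in the model.)

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion API call."""
    return "next step given: " + prompt[:40]

class VectorMemory:
    """Toy long-term memory: store snippets, retrieve by crude word overlap."""

    def __init__(self):
        self.snippets = []

    def store(self, text: str) -> None:
        self.snippets.append(text)

    def retrieve(self, query: str, k: int = 2) -> list:
        words = set(query.lower().split())
        ranked = sorted(self.snippets,
                        key=lambda s: len(words & set(s.lower().split())),
                        reverse=True)
        return ranked[:k]

def run_agent(task: str, steps: int = 3) -> list:
    """The 'exploratory work process' is this fixed loop: retrieve
    context, ask the model for one step, store the output, repeat.
    All of the control flow is hand-coded glue around the model."""
    memory = VectorMemory()
    memory.store("project goal: " + task)
    outputs = []
    for _ in range(steps):
        context = " | ".join(memory.retrieve(task))
        step = call_llm("Context: " + context + "\nTask: " + task)
        memory.store(step)
        outputs.append(step)
    return outputs
```

The disagreement in this thread is over whether glue like this, iterated on for a few years, scales to week-long projects, or whether the memory and control flow have to end up inside the model itself.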
I agree that if no progress is made on long-term memory and iterative/exploratory work processes, we won’t have AGI. My position is that we are already seeing significant progress in these dimensions and that we will see more significant progress in the next 1-3 years. (If 4 years from now we haven’t seen such progress, I’ll admit I was totally wrong about something.) Maybe part of the disagreement between us is that the stuff you think of as mere hacky workarounds, I think might work sufficiently well (with a few years of tinkering and experimentation, perhaps).
Wanna make some predictions we could bet on? Some AI capability I expect to see in the next 3 years that you expect to not see?
Coding, in the sense that GPT4 can do it, is nowhere near the top of the hierarchy of skills involved in serious software engineering. And so I believe this is a bit like saying that, because a certain robot is already pretty decent at chiseling, it will soon be able to produce works of art at the same level as any human sculptor.
I think I just don’t buy this. I work at OpenAI R&D. I see how the sausage gets made. I’m not saying the whole sausage is coding; I’m saying a significant part of it is, and moreover that many of the bits GPT4 currently can’t do seem to me like they’ll be doable in the next few years.
If the delay in real-world economic value were due to “schlep”, shouldn’t we already see one-off demonstrations of LLMs performing economically-valuable-caliber tasks in the lab? For instance, regarding software engineering, maybe it takes a long time to create a packaged product that can be deployed in the field, absorb the context of a legacy codebase, etc., and perform useful high-level work. But if that’s the only problem, shouldn’t there already be at least one demonstration of an LLM doing some meaty software engineering project in a friendly lab environment somewhere? More generally, how do we define “schlep” such that the need for schlep explains the lack of visible accomplishments today, but also allows for AI systems to be able to replace 99% of remote jobs within just four years?
To be clear, I do NOT think that today’s systems could replace 99% of remote jobs even with a century of schlep. And in particular I don’t think they are capable of massively automating AI R&D even with a century of schlep. I just think they could be producing, say, at least an OOM more economic value. My analogy here is to the internet; my understanding is that there were a bunch of apps that are super big now (amazon? tinder? twitter?) that were technically feasible on the hardware of 2000, but which didn’t just spring into the world fully formed in 2000 -- instead it took time for startups to form, ideas to be built and tested, markets to be disrupted, etc.
I define schlep the same way you do, I think.
What I predict will happen is basically described in the scenario I gave in the OP, though I think it’ll probably take slightly longer than that. I don’t want to go into much detail, I’m afraid, because it might give the impression that I’m leaking OAI secrets (even though, to be clear, I’ve had these views since before I joined OAI).
I think when you try to use the systems in practical situations, they might lose coherence over long chains of thought, or be unable to effectively debug non-performant complex code, or not be able to have as good intuitions about which research directions would be promising, et cetera.
This was a nice answer from Ege. My follow-up questions would be: Why? I have theories about what coherence is and why current models often lose it over long chains of thought (spoiler: they weren’t trained to have trains of thought), and theories about why they aren’t already excellent complex-code-debuggers (spoiler: they weren’t trained to be), etc. What’s your theory for why all the things AI labs will try between now and 2030 to make AIs good at these things will fail? Base models (gpt-3, gpt-4, etc.) aren’t out-of-the-box good at being helpful, harmless chatbots or useful coding assistants. But with a bunch of tinkering and RLHF, they became good, and now they are used in the real world by a hundred million people a day. Again, though, I don’t want to get into details. I understand you might be skeptical that it can be done, but I encourage you to red-team your position and ask yourself ‘how would I do it, if I were an AI lab hell-bent on winning the AGI race?’ You might be able to think of some things. And if you can’t, I’d love to hear your thoughts on why it’s not possible. You might be right.
I realize you’re not explicitly labeling this as a prediction, but… isn’t this precisely the sort of thought process to which Hofstadter’s Law applies?
Indeed. Like I said, my timelines are based on a portfolio of different models/worlds; the very short-timelines models/worlds are basically like “look we basically already have the ingredients, we just need to assemble them, here is how to do it...” and the planning fallacy / hofstadter’s law 100% applies to this. The 5-year-and-beyond worlds are not like that; they are more like extrapolating trends and saying “sure looks like by 2030 we’ll have AIs that are superhuman at X, Y, Z, … heck all of our current benchmarks. And because of the way generalization/transfer/etc. and ML works they’ll probably also be broadly capable at stuff, not just narrowly good at these benchmarks. Hmmm. Seems like that could be AGI.” Note the absence of a plan here, I’m just looking at lines on graphs and then extrapolating them and then trying to visualize what the absurdly high values on those graphs mean for fuzzier stuff that isn’t being measured yet.
So my timelines do indeed take into account Hofstadter’s Law. If I wasn’t accounting for it already, my median would be lower than 2027. However, I am open to the criticism that maybe I am not accounting for it enough. However, I am NOT open to the criticism that I should e.g. add 10 years to my timelines because of this, for reasons just explained. It’s a sort of “double or triple how long you think it’ll take to complete the plan” sort of thing, not a “10x how long you think it’ll take to complete the plan” sort of thing; and even if it were, I’d just ditch the plan and look at the graphs.
Likewise, thanks for the thoughtful and detailed response. (And I hope you aren’t too impacted by current events...)
I agree that if no progress is made on long-term memory and iterative/exploratory work processes, we won’t have AGI. My position is that we are already seeing significant progress in these dimensions and that we will see more significant progress in the next 1-3 years. (If 4 years from now we haven’t seen such progress, I’ll admit I was totally wrong about something.) Maybe part of the disagreement between us is that the stuff you think of as mere hacky workarounds, I think might work sufficiently well (with a few years of tinkering and experimentation, perhaps).
Wanna make some predictions we could bet on? Some AI capability I expect to see in the next 3 years that you expect to not see?
Sure, that’d be fun, and it seems like about the only reasonable next step on this branch of the conversation. Setting good prediction targets is difficult, and as it happens I just blogged about this. Off the top of my head, predictions could be around the ability of a coding AI to work independently over an extended period of time (at which point, it is arguably an “engineering AI”). Two different ways of framing it:
(1) An AI coding assistant can independently complete 80% of real-world tasks that would take X amount of time for a reasonably skilled engineer who is already familiar with the general subject matter and the project/codebase to which the task applies.
(2) An AI coding assistant can usefully operate independently for X amount of time, i.e. it is often productive to assign it a task and allow it to process for X time before checking in on it.
At first glance, (1) strikes me as a better, less-ambiguous framing. Of course, it becomes dramatically more or less ambitious depending on X. The 80% could also be tweaked, but I think this is less interesting: low percentages allow for a fluky, unreliable AI to pass the test, while very high percentages seem likely to require superhuman performance in a way that is not relevant to what we’re trying to measure here.
It would be nice to have some prediction targets that more directly get at long-term memory and iterative/exploratory work processes, but as I discuss in the blog post, I don’t know how to construct such a target – open to suggestions.
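(As a toy illustration of how framing (1) could be scored: the sketch below assumes each benchmark task carries a human time estimate and a pass/fail outcome. The record format and function name are invented for illustration; the hard part the comment worries about, producing the task records and time estimates in the first place, is not shown.)

```python
def passes_framing_1(tasks, x_hours, threshold=0.8):
    """Framing (1): did the AI complete at least `threshold` of the
    real-world tasks whose human completion time is at most x_hours?
    `tasks` is a list of (human_hours, ai_succeeded) pairs."""
    eligible = [ok for hours, ok in tasks if hours <= x_hours]
    if not eligible:
        return False  # nothing at this difficulty level to judge by
    return sum(eligible) / len(eligible) >= threshold

# Invented example data: the AI handled the short tasks but not the long ones.
results = [(1.0, True), (2.0, True), (4.0, False), (8.0, False)]
print(passes_framing_1(results, x_hours=2.0))  # True: 2/2 short tasks done
print(passes_framing_1(results, x_hours=8.0))  # False: only 2/4 overall
```

Note how the ambiguity lands in the inputs rather than the scoring rule: what counts as "reasonably skilled", and how human_hours is estimated, is hidden in how the task records get produced, which is part of why setting good prediction targets is hard.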
Coding, in the sense that GPT4 can do it, is nowhere near the top of the hierarchy of skills involved in serious software engineering. And so I believe this is a bit like saying that, because a certain robot is already pretty decent at chiseling, it will soon be able to produce works of art at the same level as any human sculptor.
I think I just don’t buy this. I work at OpenAI R&D. I see how the sausage gets made. I’m not saying the whole sausage is coding; I’m saying a significant part of it is, and moreover that many of the bits GPT4 currently can’t do seem to me like they’ll be doable in the next few years.
Intuitively, I struggle with this, but you have inside data and I do not. Maybe we should just set this point aside for now; we have plenty of other points we can discuss.
To be clear, I do NOT think that today’s systems could replace 99% of remote jobs even with a century of schlep. And in particular I don’t think they are capable of massively automating AI R&D even with a century of schlep. I just think they could be producing, say, at least an OOM more economic value. …
This, I would agree with. And on re-reading, I think I may have been mixed up as to what you and Ajeya were saying in the section I was quoting from here, so I’ll drop this.
[Ege] I think when you try to use the systems in practical situations, they might lose coherence over long chains of thought, or be unable to effectively debug non-performant complex code, or not be able to have as good intuitions about which research directions would be promising, et cetera.
This was a nice answer from Ege. My follow-up questions would be: Why? I have theories about what coherence is and why current models often lose it over long chains of thought (spoiler: they weren’t trained to have trains of thought), and theories about why they aren’t already excellent complex-code-debuggers (spoiler: they weren’t trained to be), etc. What’s your theory for why all the things AI labs will try between now and 2030 to make AIs good at these things will fail?
I would not confidently argue that it won’t happen by 2030; I am suggesting that these problems are unlikely to be well solved in a usable-in-the-field form by 2027 (four years from now). My thinking:
(1) The rapid progress in LLM capabilities has been substantially fueled by the availability of stupendous amounts of training data.
(2) There is no similar abundance of low-hanging training data for extended (day/week/more) chains of thought, nor for complex debugging tasks. Hence, it will not be easy to extend LLMs (and/or train some non-LLM model) to high performance at these tasks.
(3) A lot of energy will go into the attempt, which will eventually succeed. But per (2), I think some new techniques will be needed, which will take time to identify, refine, scale, and productize; a heavy lift in four years. (Basically: Hofstadter’s Law.)
Especially because I wouldn’t be surprised if complex-code-debugging turns out to be essentially “AGI-complete”, i.e. it may require a sufficiently varied mix of exploration, logical reasoning, code analysis, etc. that you pretty much have to be a general AGI to be able to do it well.
I understand you might be skeptical that it can be done but I encourage you to red-team your position, and ask yourself ‘how would I do it, if I were an AI lab hell-bent on winning the AGI race?’ You might be able to think of some things.
In a nearby universe, I would be fundraising for a startup to do exactly that; it sounds like a hell of a fun problem. :-) And I’m sure you’re right… I just wouldn’t expect to get to “capable of 99% of all remote work” within four years.
I realize you’re not explicitly labeling this as a prediction, but… isn’t this precisely the sort of thought process to which Hofstadter’s Law applies?
Indeed. Like I said, my timelines are based on a portfolio of different models/worlds; the very short-timelines models/worlds are basically like “look we basically already have the ingredients, we just need to assemble them, here is how to do it...” and the planning fallacy / hofstadter’s law 100% applies to this. The 5-year-and-beyond worlds are not like that; they are … looking at lines on graphs and then extrapolating them …
So my timelines do indeed take into account Hofstadter’s Law. If I wasn’t accounting for it already, my median would be lower than 2027. However, I am open to the criticism that maybe I am not accounting for it enough.
To be clear, I’m only attempting to argue about the short-timeline worlds. I agree that Hofstadter’s Law doesn’t apply to curve extrapolation. (My intuition for 5-year-and-beyond worlds is more like Ege’s, but I have nothing coherent to add to the discussion on that front.) And so, yes, I think my position boils down to “I believe that, in your short-timeline worlds, you are not accounting for Hofstadter’s Law enough”.
As you proposed, I think the interesting place to go from here would be some predictions. I’ll noodle on this, and I’d be very interested to hear any thoughts you have – milestones along the path you envision in your default model of what rapid progress looks like; or at least, whatever implications thereof you feel comfortable talking about.
Oooh, I should have thought to ask you this earlier—what numbers/credences would you give for the stages in my scenario sketched in the OP? This might help narrow things down. My guess based on what you’ve said is that the biggest update for you would be Step 2, because that’s when it’s clear we have a working method for training LLMs to be continuously-running agents—i.e. long-term memory and continuous/exploratory work processes.