Thanks, this is the kind of comment that breaks things down by missing capabilities, which is what I was hoping to see.
Episodic memory is less trivial, but still relatively easy to improve from current near-zero-effort systems
I agree that it’s likely to be relatively easy to improve from current systems, but just improving it is a much lower bar than getting episodic memory to actually be practically useful. So I’m not sure why this alone would imply a very short timeline. Getting things from “there are papers about this in the literature” to “actually sufficient for real-world problems” often takes significant time, e.g.:
I believe that chain-of-thought prompting was introduced in a NeurIPS 2022 paper. Going from there to a model that systematically and rigorously made use of it (o1) took about two years, even though the idea was quite straightforward in principle.
After the 2007 DARPA Grand Challenge there was a lot of hype about how self-driving cars were just around the corner, but almost two decades later, they’re basically still in beta.
My general prior is that this kind of work—from conceptual prototype to robust real-world application—can easily take anywhere from years to decades, especially once we move out of domains like games/math/programming and into ones that are significantly harder to formalize and test. Also, the more interacting components you have, the trickier it gets to test and train.
I think the intelligence inherent in LLMs will make episodic memory systems useful immediately. The people I know building chatbots with persistent memory were already finding it useful with vector databases; it just topped out in capacity quickly because memory search slowed down too much. And that was as of a year ago.
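For concreteness, here’s a toy sketch of the kind of vector-store memory I have in mind (the names are placeholders rather than any particular library, and embed() stands in for whatever embedding model you’d use):

```python
import numpy as np

class EpisodicMemory:
    """Store past interactions as embedding vectors; retrieve the most similar ones for a new query."""

    def __init__(self, embed):
        self.embed = embed      # placeholder: str -> unit-normalized np.ndarray from some embedding model
        self.texts = []         # the raw memories
        self.vectors = []       # their embeddings

    def store(self, text):
        self.texts.append(text)
        self.vectors.append(self.embed(text))

    def recall(self, query, k=5):
        # Brute-force cosine similarity over every stored memory. This linear scan is
        # what stops scaling as the memory grows -- roughly the "topped out in capacity
        # by slowing down search" problem mentioned above.
        q = self.embed(query)
        sims = np.array([v @ q for v in self.vectors])
        top_k = sims.argsort()[::-1][:k]
        return [self.texts[i] for i in top_k]
```

Approximate-nearest-neighbor indexes are the standard fix for that scaling problem, which is part of why I expect these systems to improve quickly once competent teams work on them seriously.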
I don’t think I managed to convey one central point, which is that reflection and continuous learning together can fill a lot of cognitive gaps. I think they do for humans. We can analyze our own thinking and then use new strategies where appropriate. It seems like the pieces are all there for LLM cognitive architectures to do this as well. Such a system will still take time to dramatically self-improve by re-engineering its base systems. But there’s a threshold of general intelligence and self-directed learning at which a system can self-correct and self-improve in limited but highly useful ways, so that its designers don’t have to fix every flaw by hand, and it can just back up and try again differently.
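To make that concrete, here’s a minimal sketch (my own toy illustration, not anyone’s actual agent design) of how reflection and memory compose; llm() is a placeholder for any chat-model call, and memory is the kind of vector store sketched above:

```python
def reflective_attempt(task, llm, memory):
    # Retrieve lessons learned from earlier, similar tasks.
    lessons = memory.recall(task, k=3)
    # Attempt the task with those lessons in context.
    attempt = llm(f"Task: {task}\nRelevant past lessons: {lessons}\nSolve the task.")
    # Reflection: analyze the attempt and extract a better strategy.
    critique = llm(
        f"Task: {task}\nAttempt: {attempt}\n"
        "What went wrong or could be improved, and what strategy should be tried next time?"
    )
    # Continuous learning: store the lesson so the system can back up and try differently later.
    memory.store(f"Task: {task}\nLesson: {critique}")
    return attempt, critique
```

None of this fixes flaws in the base model, but it’s the shape of the loop that lets a system self-correct without its designers fixing every flaw by hand.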
I don’t like the term unhobbling, because what’s happening is more like adding cognitive tools that make new uses of LLMs’ considerable flexible intelligence.
All of the approaches that would enable self-directed continuous learning are clunky now, but there are no obvious roadblocks to their improving rapidly when a few competent teams start working on them full-time. And since there are multiple approaches already in play, there’s a better chance that some combination becomes useful quickly.
Yes, refining it and other systems will take time. But counting on that taking a long time doesn’t seem sensible. I am considering writing the complementary post, “what are the best arguments for long timelines?”, because I’m curious. I expect the strongest ones to support something like five-year timelines to what I consider AGI—which importantly is fully general in that it can learn new things, but will not meet the bar of doing 95% of remote jobs because it’s not likely to be human-level in all areas right away.
I focus on that definition because it seems like a fairly natural category shift from limited tool AI to sapience and understanding in “Real AGI” that roughly matches our intuitive understanding of humans as entities, minds, or beings that understand and can learn about themselves if they choose to.
The other reason I focus on that transition is that I expect it to function as a wake-up call to those who don’t imagine agentic AI in detail. It will match their intuitions about humans well enough for our recognition of humans as very dangerous to also apply to that type of AI. Hopefully their growth from general-and-sapient-but-dumb-in-some-ways will be slow enough for society to adapt—months to years may be enough.
Thanks. Still not convinced, but it will take me a full post to explain why exactly. :)
Though possibly some of this is due to a difference in definitions. When you say this:
what I consider AGI—which importantly is fully general in that it can learn new things, but will not meet the bar of doing 95% of remote jobs because it’s not likely to be human-level in all areas right away
Do you have a sense of how long you expect it to take to go from “can learn new things” to “doing 95% of remote jobs”? If you e.g. expect that it might still take several years for AGI to master most jobs once it has been created, then that might be more compatible with my model.
I do think our models may be pretty similar once we get past slightly different definitions of AGI.
It’s pretty hard to say how fast the types of agents I’m envisioning would take off. It could be a while between what I’m calling real AGI that can learn anything, and having it learn well and quickly enough, and be smart enough, to do 95% of remote jobs. If there aren’t breakthroughs in learning and memory systems, it could take as much as three years to really start doing substantial work, followed by a slow progression toward 95% of jobs as it’s taught and teaches itself new skills. The incremental improvements on existing memory systems—RAG, vector databases, and fine-tuning for skills and new knowledge—would remain clumsier than human learning for a while.
This would be potentially very good for safety. Semi-competent agents that aren’t yet takeover-capable might wake people up to the alignment and safety issues. And I’m optimistic about the agent route for technical alignment; of course that’s a more complex issue. Intent alignment as a stepping-stone to value alignment gives the broad outline and links to more work on how instruction-following language model agents might bypass some of the worst concerns about goal mis-specification and mis-generalization and risks from optimization.
You made a good point in the linked comment that these systems will be clumsier to train and improve if they have more moving parts. My impression from the little information I have on agent projects is that this is true. But I haven’t heard of a large and skilled team taking on this task yet; it will be interesting to see what one can do. And at some point, an agent directing its own learning and performance gains an advantage that can offset the disadvantage of its underlying system being harder for humans to improve and optimize.
I look forward to that post if you get around to writing it. I’ve been toying with the idea of writing a more complete post on my short timelines and slow takeoff scenario. Thanks for posing the question and getting me to dash off a short version at least.
I’d argue that self-driving cars were essentially solved by Waymo in 2021-2024, and to a lesser extent I’d include Tesla in this too, and that a lot of the reason self-driving cars aren’t on the roads is liability issues. So in essence, self-driving cars came 14-17 years after the DARPA Grand Challenge.
Hmm, some years back I was hearing the claim that self-driving cars work badly in winter conditions, so are currently limited to the kinds of warmer climates where Waymo is operating. I haven’t checked whether that’s still entirely accurate, but at least I haven’t heard any news of this having made progress.
My guess is that a large portion of the “works badly in winter conditions” issue is closer to: it does work reasonably well in winter conditions, but it doesn’t work so well that you avoid lawsuits and liability issues.
I’d argue the moral of self-driving cars is that regulation can slow down tech considerably, which does have implications for AI policy.