I think the gaps between where we are and human-level (and broadly but not precisely human-like) cognition are smaller than they appear. Modest improvements in cognitive systems that have been neglected to date can let LLMs apply their cognitive abilities in more ways, opening more human-like routes to performance and learning. These strengths will build on each other nonlinearly (while likely also encountering unexpected roadblocks).
Timelines are thus very difficult to predict, but ruling out very short timelines by averaging predictions, without gears-level models of fast routes to AGI, would be a big mistake. Whether and how quickly those routes work is an empirical question.
One blocker to taking short timelines seriously is the belief that fast timelines mean likely human extinction. I think they're extremely dangerous, but I also think possible routes to alignment exist. That's a separate question, though.
I also think this is the current default path, or I wouldn’t describe it.
I think my research career using deep nets and cognitive architectures to understand human cognition is pretty relevant for making good predictions on this path to AGI. But I’m biased, just like everyone else.
Anyway, here’s very roughly why I think the gaps are smaller than they appear.
Current LLMs are like humans with excellent:
- language abilities
- semantic memory
- working memory
They can now do almost all short time-horizon tasks that are framed in language, and do them better than humans. And where humans haven't already done it, other networks can translate real-world systems into language and code.
But current LLMs/foundation models are dramatically missing some human cognitive abilities:
- Almost no episodic memory for specific important experiences
- No agency: they do only what they're told
- Poor executive function (self-management of cognitive tasks)
- Relatedly, incompetent at long time-horizon tasks
- Zero continuous learning (including self-directed learning)
  - This is crucial for human performance on complex tasks
Those missing abilities would appear to imply long timelines.
But both long time-horizon tasks and self-directed learning are fairly easy to reach. The gaps are not as large as they appear.
Agency is as simple as repeatedly calling an LLM with a prompt like "act as an agent working toward goal X; use tools Y to gather information and take actions as appropriate." The gap between a good oracle and an effective agent is almost completely illusory.
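To make that concrete, here is a minimal sketch of such a wrapper loop. Everything in it is a hypothetical stand-in (the `call_llm` function, the toy tools, the JSON action format); it illustrates the pattern, not any particular agent framework:

```python
# Minimal sketch of the oracle-to-agent wrapper described above.
# call_llm, the tool names, and the JSON action format are hypothetical
# stand-ins, not any particular product's API.
import json

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion call; returns the model's reply text."""
    raise NotImplementedError

TOOLS = {
    "search": lambda query: f"(search results for {query!r})",
    "finish": lambda summary: summary,
}

def run_agent(goal: str, max_steps: int = 20) -> str:
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Act as an agent working toward this goal: {goal}\n"
            f"Available tools: {list(TOOLS)}\n"
            f"History so far: {json.dumps(history)}\n"
            'Reply only with JSON: {"tool": "<name>", "argument": "<string>"}'
        )
        step = json.loads(call_llm(prompt))             # the model picks the next action
        result = TOOLS[step["tool"]](step["argument"])  # the loop executes it
        history.append({"action": step, "result": result})
        if step["tool"] == "finish":                    # the model decides when it's done
            return result
    return "stopped: step budget exhausted"
```

Everything interesting still happens inside the model call; the loop just gives an oracle hands and a reason to keep going.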
Episodic memory is less trivial, but still relatively easy to improve from current near-zero-effort systems. Efforts from here will likely build on LLMs' strengths. I'll say no more publicly; DM me for details. But it doesn't take a PhD in computational neuroscience to rederive this, which is the only reason I'm mentioning it publicly. More on infohazards later.
Now to the capabilities payoff: long time-horizon tasks and continuous, self-directed learning.
Long time-horizon task abilities are an emergent product of episodic memory and general cognitive abilities. LLMs are "smart" enough to manage their own thinking; they just don't have the instructions or skills to do it. o1 appears to have those skills (although not the episodic memory that is very helpful for managing multiple chains of thought), so similar RL training on chains of thought is probably one route to achieving them.
Humans mostly do not perform long time-horizon tasks by trying them over and over. They either ask someone how to do it, then memorize and reference those strategies with episodic memory; or they perform self-directed learning, posing questions and forming theories to answer them.
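As a toy illustration of that "ask once, then remember and reference the strategy" pattern (and only the pattern: the keyword-overlap retrieval and the example strategy are made up, and this is not the memory design I'm declining to describe above):

```python
# A deliberately generic sketch of "ask once, then remember the strategy".
# Strategies are stored as text and retrieved by crude keyword overlap.
strategy_memory: dict[str, str] = {}  # task description -> strategy text

def remember_strategy(task: str, strategy: str) -> None:
    strategy_memory[task] = strategy

def recall_strategy(task: str) -> str | None:
    """Return the stored strategy whose task description overlaps the most."""
    words = set(task.lower().split())
    best, best_overlap = None, 0
    for stored_task, strategy in strategy_memory.items():
        overlap = len(words & set(stored_task.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = strategy, overlap
    return best

# Learn once from instruction, then reuse on later, similar tasks.
remember_strategy(
    "file quarterly taxes",
    "1) gather income records 2) fill in the forms 3) check totals 4) submit",
)
print(recall_strategy("file this quarter's taxes"))
```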
Humans do not have or need "9s of reliability" to perform long time-horizon tasks. We substitute frequent error-checking and error-correction. We then learn continuously, both on strategy (largely episodic memory) and on skills/habits (fine-tuning LLMs already provides a form of this habitization of explicit knowledge into fast implicit skills).
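A minimal sketch of what that substitution looks like, assuming hypothetical `do_step` and `check_step` calls that would both be model calls in an LLM agent (one to act, one to critique the result):

```python
# Minimal sketch of substituting error-checking and correction for per-step
# reliability. do_step and check_step are hypothetical stand-ins.
def do_step(instruction: str, feedback: str | None = None) -> str:
    """Attempt one step of a task, optionally incorporating earlier feedback."""
    raise NotImplementedError

def check_step(instruction: str, result: str) -> tuple[bool, str]:
    """Return (is_ok, critique) for a completed step."""
    raise NotImplementedError

def reliable_step(instruction: str, max_retries: int = 3) -> str:
    feedback = None
    for _ in range(max_retries):
        result = do_step(instruction, feedback)
        ok, critique = check_step(instruction, result)
        if ok:
            return result
        feedback = critique  # carry the critique into the next attempt
    raise RuntimeError(f"step failed after {max_retries} attempts: {instruction}")
```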
Continuous, self-directed learning is a product of having any type of new learning (memory) and using some of the network/agent's cognitive abilities to decide what's worth learning. The learning itself could be selective fine-tuning (as in o1's "deliberative alignment"), episodic memory, or, as a first step, even very long context with good access. This is how humans master new tasks, along with taking instruction wisely. It would be very helpful for mastering economically valuable tasks, so I expect real effort to be put into it.
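The simplest version of that selection step might look like the sketch below, assuming a hypothetical `call_llm` function and a plain list standing in for whatever memory or fine-tuning pipeline is actually used:

```python
# Minimal sketch of self-directed learning as "decide what is worth keeping".
# call_llm and the plain-list memory are hypothetical stand-ins; the same gate
# could route items into a fine-tuning dataset or a long context instead.
def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion call."""
    raise NotImplementedError

learned_items: list[str] = []  # stand-in for episodic memory or a fine-tuning set

def maybe_learn(observation: str, goal: str) -> None:
    """Let the model itself judge whether an observation is worth remembering."""
    verdict = call_llm(
        f"Goal: {goal}\n"
        f"Observation: {observation}\n"
        "Is this worth remembering for future attempts at goals like this? "
        "Answer YES or NO on the first line, then one line saying what to remember."
    )
    lines = verdict.strip().splitlines()
    if lines and lines[0].strip().upper().startswith("YES"):
        learned_items.append(lines[-1] if len(lines) > 1 else observation)
```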
Self-directed learning would also be critical for an autonomous agent to accomplish entirely novel tasks, like taking over the world.
This is why I expect "Real AGI" that is agentic and learns on its own, not just transformative tool "AGI", within the next five years (or less). It's easy and useful, and perhaps the shortest path to capabilities (as with humans teaching themselves).
If that happens, I don't think we're necessarily doomed, even without much new progress on alignment (although that would definitely improve our odds!). We are already teaching LLMs mostly to answer questions correctly and to follow instructions. As long as nobody gives their agent an open-ended top-level goal like "make me lots of money", we might be okay. Instruction-following AGI is easier and more likely than value-aligned AGI, although I need to work through and clarify why I find this so central. I'd love help.
Convincing predictions are also blueprints for progress. Thus, I have been hesitant to say all of that clearly.
I said some of this at more length in "Capabilities and alignment of LLM cognitive architectures" and elsewhere. But I didn't publish the rest during my previous neuroscience career, nor have I elaborated on it since.
But I’m increasingly convinced that all of this stuff is going to quickly become obvious to any team that sits down and starts thinking seriously about how to get from where we are to really useful capabilities. And more talented teams are steadily doing just that.
I now think it's more important for the alignment community to take short timelines seriously than for us to keep hiding our knowledge in hopes that it won't be quickly rederived. There are more and more smart and creative people working directly toward AGI. We should not bet on their incompetence.
There could certainly be unexpected theoretical obstacles. There will certainly be practical obstacles. But even with expected discounts for human foibles and idiocy and unexpected hurdles, timelines are not long. We should not assume that any breakthroughs are necessary, or that we have spare time to solve alignment adequately to survive.