I have a few related questions about AGI timelines. My general impression is that Eliezer’s predictions about AGI and doom rest on a belief in extraordinarily fast AI development, and therefore a near-term AGI arrival date, which I take to imply an earlier date of doom. I have three questions related to this matter:
For those who currently believe that AGI (using whatever definition of AGI you see fit) will be arriving very soon—which, if I’m not mistaken, is what Eliezer is predicting—approximately how soon are we talking about? Is this 2-3 years soon? 10 years soon? (I know Eliezer has a bet that the world will end before 2030, so I’m trying to see whether there has been any clarification of how soon before 2030.)
How much do Eliezer’s views on timelines differ from those of other big-name AI safety researchers?
I’m currently under the impression that it takes significant knowledge of artificial intelligence to make even reasonably accurate predictions about AGI timelines. Is this impression correct? And if so, would it be a good idea to reference aggregate forecasts such as those on Metaculus when trying to frame how much time we have left?
There are actually two different parts to the answer, and the difference is important. There is the time between now and the first AI capable of autonomously improving itself (time to AGI), and there is the time it takes for that AI to “foom,” meaning to improve itself from roughly human level towards godhood. In EY’s view, it doesn’t matter at all how long we have between now and AGI, because foom will happen so quickly and be so decisive that no one will be able to respond and stop it. (Maybe, if we had 200 years, we could solve it, but we don’t.) In other people’s view (including Robin Hanson and Paul Christiano, I think) there will be a “slow takeoff”: AI will gradually improve itself over years, probably working with human researchers during that time while progressively gaining more autonomy and skills. Hanson and Christiano agree with EY that doom is likely. In fact, in the slow-takeoff view ASI might arrive even sooner than in the fast-takeoff view.
I’m not sure about Hanson, but Christiano is a lot more optimistic than EY.
Isn’t it conceivable that improving intelligence becomes difficult faster than the AI is scaling? E.g., couldn’t it be that somewhere around human-level intelligence, each marginal percent of improvement becomes twice as difficult as the previous percent? I admit that doesn’t sound very likely, but if it were the case, then even a self-improving AI would potentially improve itself very slowly, and maybe even sublinearly rather than exponentially, wouldn’t it?
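To make that concrete, here is a toy sketch (my own illustration; the effort units, the +1% step size, and the step counts are arbitrary assumptions, not anyone’s real estimates). It compares an AI that reinvests its growing capability into self-improvement under constant marginal difficulty against the doubling-difficulty case I’m describing:

```python
# Toy model of recursive self-improvement (purely illustrative: every number
# here is an arbitrary assumption, not anyone's actual estimate).
# Each step the AI invests research effort proportional to its current
# capability. We compare constant marginal difficulty against the case where
# each +1% capability gain costs twice as much effort as the previous one.

def simulate(steps, difficulty_growth):
    capability = 1.0          # arbitrary units
    cost_of_next_gain = 1.0   # effort needed for the next +1% improvement
    effort_pool = 0.0
    for _ in range(steps):
        effort_pool += capability              # smarter AI -> more effort per step
        while effort_pool >= cost_of_next_gain:
            effort_pool -= cost_of_next_gain
            capability *= 1.01                 # one marginal +1% improvement
            cost_of_next_gain *= difficulty_growth
    return capability

# Constant difficulty: gains compound and growth keeps accelerating.
print("constant difficulty:", round(simulate(80, 1.0), 2))
# Doubling difficulty: costs outrun the AI's own growth, so capability rises
# only logarithmically with time, i.e. sublinearly.
print("doubling difficulty:", round(simulate(80, 2.0), 2))
```

Under the doubling-difficulty assumption, capability ends up growing roughly logarithmically with time even though the AI’s research effort scales with its own capability, which is the sublinear outcome I have in mind.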
For a survey of experts, see:
https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
Most experts expect AGI to arrive between 2030 and 2060, so predictions of AGI before 2030 are definitely in the minority.
My own take is that a lot of current research is focused on scaling, and has found that deep learning scales quite well to very large sizes. A similar pattern shows up in evolutionary comparisons: one of the main differences between the human brain and the chimpanzee brain is size (neuron count), pure and simple.
As a result, the main limiting factor appears to be the amount of hardware we can throw at the problem. Current research into large models is very much hardware-limited, with only the major labs (Google, DeepMind, OpenAI, etc.) able to afford the compute costs of training large models. Iterating on model architecture at large scales is hard because of the costs involved. Thus, I personally predict that we will achieve AGI only when the cost of compute drops to the point where FLOPs roughly equivalent to the human brain can be purchased on a more modest budget; that drop in price will open the field up to much more experimentation.
We do not have AGI yet even on current supercomputers, but it’s starting to look like we might be getting close (close = within a factor of 10 or 100). Assuming continued progress on Moore’s law (not at all guaranteed), another 15-20 years will bring another 1000x drop in the cost of compute, which is probably enough for numerous smaller labs with smaller budgets to really start experimenting. The big labs will have a few years’ head start, but even if they don’t figure it out themselves, they will be well positioned to scale into super-intelligent territory immediately, as soon as the smaller labs help make whatever breakthroughs are required. The longer it takes to solve the software problem, the more spare hardware there will be to scale with immediately, which means a faster foom. Getting AGI sooner, with less surplus hardware lying around, may thus yield a better outcome.
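As a rough sanity check on that arithmetic (a stylised fixed-halving-cadence assumption, not a prediction): a ~1000x drop in the cost of compute is about ten halvings, so halving times of 1.5-2 years land in the 15-20 year range.

```python
import math

# Back-of-the-envelope check of the "15-20 years for another 1000x drop" figure,
# assuming the cost of compute halves on a fixed cadence (a stylised Moore's-law
# assumption, not a guarantee).
target_drop = 1000                      # desired reduction in cost per FLOP
halvings = math.log2(target_drop)       # ~9.97 halvings, since 2**10 = 1024

for halving_time_years in (1.5, 2.0):
    years = halvings * halving_time_years
    print(f"halving every {halving_time_years} yr -> ~{years:.0f} years for a 1000x drop")
# Prints roughly 15 and 20 years, matching the range above.
```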
I would tentatively put the date at around 2035, +/- 5 years.
If we run into a roadblock that requires substantially new techniques (e.g., if gradient descent turns out not to be enough), then the timeline could be pushed back. However, I haven’t seen much evidence yet that we’ve hit any fundamental algorithmic limitations.