Most experts expect AGI between 2030 and 2060, so predictions of AGI arriving before 2030 are definitely in the minority.
My own take is that a lot of current research is focused on scaling, and has found that deep learning scales quite well to very large sizes. This finding is echoed in evolutionary studies: one of the main differences between the human brain and the chimpanzee brain is just size (neuron count), pure and simple.
As a result, the main limiting factor appears to be the amount of hardware that we can throw at the problem. Current research into large models is very much hardware limited, with only the major labs (Google, DeepMind, OpenAI, etc.) able to afford the compute costs of training large models. Iterating on model architecture at large scales is hard because of the costs involved. Thus, I personally predict that we will achieve AGI only when the cost of compute drops to the point where compute (FLOP/s) roughly equivalent to the human brain can be purchased on a more modest budget; the drop in price will open up the field to more experimentation.
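To make that threshold concrete, here is a back-of-the-envelope sketch in Python. The brain-compute figure and the price-per-FLOP/s figure are loose assumptions chosen only to show the shape of the calculation, not measurements:

    # Back-of-the-envelope cost of human-brain-equivalent compute.
    # Both inputs are rough, contested assumptions.
    BRAIN_FLOPS = 1e16          # assumed brain compute, FLOP/s (published estimates span orders of magnitude)
    DOLLARS_PER_FLOPS = 1e-10   # assumed hardware price, $ per sustained FLOP/s (order of magnitude only)

    hardware_cost = BRAIN_FLOPS * DOLLARS_PER_FLOPS
    print(f"One brain-equivalent of hardware: ~${hardware_cost:,.0f}")
    for drop in (10, 100, 1000):
        print(f"after a {drop}x price drop: ~${hardware_cost / drop:,.0f}")

Plug in different assumptions and the absolute numbers move a lot, but the point stands: each 10x drop in price moves brain-scale compute down one rung, from "major lab" budgets toward "small lab" budgets.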
We do not have AGI yet even on current supercomputers, but it’s starting to look like we might be getting close (close = within a factor of 10 or 100 in compute). Assuming continuing progress in Moore’s law (not at all guaranteed), another 15-20 years will bring another 1000x drop in the cost of compute, which is probably enough for numerous smaller labs with smaller budgets to really start experimenting. The big labs will have a few years’ head start, but if they don’t figure it out themselves, they will still be well positioned to scale into super-intelligent territory as soon as the small labs make whatever breakthroughs are required. The longer it takes to solve the software problem, the more hardware we’ll have available to scale with immediately, which means a faster foom. Getting AGI sooner, while there is less spare hardware lying around to scale with, may thus yield a better (slower-takeoff) outcome.
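The 15-20 year figure is just doubling arithmetic; a minimal sketch, assuming compute cost halves every 1.5-2 years (a continuation of Moore's law that is itself uncertain):

    import math

    # Years needed for a 1000x drop in compute cost if cost halves every T years.
    doublings = math.log2(1000)                # ~10 halvings for a 1000x change
    for halving_time in (1.5, 2.0):
        years = doublings * halving_time
        print(f"halving every {halving_time} yr -> 1000x cheaper in ~{years:.0f} yr")
    # ~15 years at a 1.5-year halving time, ~20 years at 2 years.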
I would tentatively put the date at around 2035, +/- 5 years.
If we run into a roadblock that requires substantially new techniques (e.g., if gradient descent isn’t enough), then the timeline could be pushed back. However, I haven’t seen much evidence that we’ve hit any fundamental algorithmic limitations yet.
For a survey of experts, see:
https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/