Who are the AI Capabilities researchers trying to build AGI and think they will succeed within the next 30 years?
Among organizations, both OpenAI and DeepMind are explicitly aiming at AGI and seem confident they will get there. I don’t know their internal timelines, nor whether they’ve stated them publicly.
There are numerous big corporate research labs working in the area: OpenAI, DeepMind, Google Research, and Meta AI (formerly Facebook AI), plus many academic labs.
The rate of progress has been accelerating. From 1960 to 2010, progress was incremental and remained centered on narrow problems (chess) or toy problems. Since 2015, progress has been very rapid, driven mainly by new hardware and big data. Long-standing hard problems in ML/AI, such as Go, image understanding, language translation, and logical reasoning, seem to fall on an almost monthly basis now, and huge amounts of money and intellect are being thrown at the field. The rate of advance from 2015 to 2022 (only 7 years) has been phenomenal; given another 30, it’s hard to imagine that we wouldn’t reach an inflection point of some kind.
I think the burden of proof is now on those who don’t believe that 30 years is enough time to crack AGI. You would have to postulate some fundamental obstacle, such as discovering that the human brain does things that can’t be done in silicon, that would arrest the current rate of progress and lead to a new “AI winter.”
Historically, AI researchers have often been overconfident. But this time does feel different.