“AGI is happening soon. Significant probability of it happening in less than 5 years.”
I agree that there is at least some probability of AGI within 5 years, and my median is something like 8-9 years (which is significantly earlier than that of most of the research community, and also of most of the alignment/safety/LW community, as far as I know).
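To make the relationship between those two numbers concrete, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that time-to-AGI T is exponentially distributed with a median of 8.5 years; I make no claim that this is the right shape for the distribution, it is just the simplest one for the arithmetic.

\[
P(T < 5) \;=\; 1 - e^{-5\lambda} \;=\; 1 - 2^{-5/8.5} \;\approx\; 0.33,
\qquad \text{where } \lambda = \frac{\ln 2}{8.5}\ \text{per year}.
\]

So even on my comparatively aggressive median, a simple model of this kind puts the within-5-years probability at roughly one in three, not anywhere near certainty.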
Yet I think that the following statements are not at all isomorphic to the above, and are indeed—in my view—absurdly far off the mark:
“We don’t have any obstacle left in mind that we don’t expect to get overcome in more than 6 months after efforts are invested to take it down.”
“If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources?”
Let’s look at some examples of why.
DeepMind’s AlphaGo took at least 1.5 years of development, and possibly closer to 2, to reach human professional standard.
DeepMind’s AlphaFold—essentially a simple supervised learning problem at its core—was an internal project for at least 3 years before culminating in the Nature paper version.
OpenAI’s DOTA-playing OpenAI Five again took at least 2.5 years of development to reach human professional level on a restricted format of the game (and arguably only sub-professional level, once humans had more time to adapt to its playstyle).
In all three cases, the teams were large, well-funded, and focused on the problem domain throughout those periods.
One may argue (a) that these happened in the past, and that AI resources, compute, and research-iteration speed are all substantially better now, and (b) that the above projects did not have the singular focus of the entire organisation. I would accept both arguments. However, the above are all highly constrained problems, with particularities eminently well suited to modern AI techniques. The space of ‘all possible obstacles’ and ‘all problems’ is vastly larger than this.
I wonder what model of AI R&D you guys have that gives you the confidence to make such statements in the face of what seems to me to be strong contrary empirical evidence.