Here’s the simplest reason to take short timelines seriously:
We don’t know how easy it might be to create AGI.
It’s often said we don’t know how hard it will be, and that project timelines are routinely underestimated. But it’s equally true that we don’t know how easy it might turn out to be, or what synergies we’ll find across techniques and tools that will speed progress.
More specific reasoning also points me toward short timelines, or at the very least toward their strong possibility.
We now have a system that performs like a 140 IQ human (give or take a lot) for most cognitive tasks framed in language. There are notable gaps where these systems perform worse than humans. We have other systems that can turn sensory input, both human and otherwise, into language.
How could anyone be sure it won’t be easy to fill those gaps? There are obvious measures, like episodic memory and executive-functioning scaffolding, and combining those with external tools including (but far from limited to) sensory networks. A rough sketch of what that scaffolding could look like follows below.
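To make "scaffolding" concrete, here is a minimal sketch of the shape of such a system: a language-model call wrapped in an executive loop that stores and retrieves past episodes. This is purely my own illustration, not a description of any existing system; `call_model`, `EpisodicMemory`, and `executive_loop` are hypothetical names, and a real implementation would use embedding-based retrieval and an actual model API rather than the toy stand-ins here.

```python
from dataclasses import dataclass, field


@dataclass
class EpisodicMemory:
    """Toy episodic memory: store past episodes, retrieve by word overlap.

    A real system would use vector embeddings; this only shows the loop's shape.
    """
    episodes: list = field(default_factory=list)

    def store(self, text: str) -> None:
        self.episodes.append(text)

    def retrieve(self, query: str, k: int = 3) -> list:
        # Rank stored episodes by how many words they share with the query.
        q = set(query.lower().split())
        ranked = sorted(self.episodes,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return ranked[:k]


def call_model(prompt: str) -> str:
    """Placeholder for any language-model API call."""
    return f"(model output for: {prompt[:40]}...)"


def executive_loop(goal: str, memory: EpisodicMemory, max_steps: int = 3) -> str:
    """Executive-functioning scaffold: recall, act, record, repeat."""
    result = ""
    for step in range(max_steps):
        context = "\n".join(memory.retrieve(goal))
        prompt = f"Goal: {goal}\nRelevant past episodes:\n{context}\nNext step:"
        result = call_model(prompt)
        memory.store(f"Step {step}: {result}")  # becomes retrievable next turn
    return result


if __name__ == "__main__":
    mem = EpisodicMemory()
    print(executive_loop("summarize recent sensor readings", mem))
```

The point of the sketch is only that each piece (memory, planning loop, tool hookup) is ordinary engineering around a model, not a research breakthrough.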
I’ve been building neural network models of brain function since 1999. It looks to me like we need zero breakthroughs to reproduce (functionally, in a loose analog) human brain function in networks. The remainder is just schlep: scaling, scaffolding, and combinations of techniques. I could easily be wrong. But it seems like we should at least be taking the possibility very seriously.