This graph nicely summarizes Moravec’s timeline from Mind Children (1988). The book itself presents his view that AI progress is primarily constrained by the compute power available to most researchers, which is usually around that of a PC.
Moravec et al. were correct in multiple key disagreements with EY et al.:
That progress was smooth and predictable from Moore’s Law (similar to how the arrival of flight is postdictable from internal-combustion-engine progress)
That AGI would be based on reverse engineering the brain, and thus would be inherently anthropomorphic
That “recursive self-improvement” was mostly relevant only in the larger systemic sense (at the level of civilization)
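The Moore’s-Law extrapolation above can be sketched numerically. The constants here are assumptions for illustration (a brain-compute estimate of ~10^13 ops/sec, roughly in line with Moravec’s figures; ~1 MIPS for a 1988 PC; an 18-month doubling time), not exact values from the book:

```python
import math

# Assumed constants for illustration -- not Moravec's exact numbers.
BRAIN_OPS_PER_SEC = 1e13   # rough estimate of human-brain compute
PC_OPS_1988 = 1e6          # ~1 MIPS personal computer in 1988
DOUBLING_YEARS = 1.5       # assumed Moore's-law doubling time

def year_of_parity(start_year=1988, start_ops=PC_OPS_1988,
                   target_ops=BRAIN_OPS_PER_SEC, doubling=DOUBLING_YEARS):
    """Year a PC reaches target_ops under simple exponential growth."""
    doublings = math.log2(target_ops / start_ops)
    return start_year + doublings * doubling

print(round(year_of_parity()))  # → 2023
```

Under these assumed inputs the crossover lands in the early 2020s; the point is only that the prediction follows mechanically from the growth curve, and shifts by a decade or so as the brain-compute estimate moves an order of magnitude.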
LLMs are far more anthropomorphic (brain-like) than the fast clean consequential reasoners EY expected:
close correspondence to linguistic cortex (internal computations and training objective)
complete with human-like cognitive biases!
unexpected human-like limitations: they struggle with simple tasks like arithmetic, longer-term planning, etc.
AGI misalignment insights from Jungian psychology have proven more effective/useful/popular than MIRI’s core research
All of this was predicted from the systems/cybernetics framework: human minds are software constructs, brains are efficient and tractable, and thus AGI is mostly a matter of reverse engineering the brain and then downloading/distilling human mindware into the new digital substrate.
I don’t know that the graph settles the question: is Moravec predicting AGI at “Human equivalence in a supercomputer” or “Human equivalence in a personal computer”? Hard to say from the graph alone.
The fact that he specifically talks about “compute power available to most researchers” makes his prediction clearer. Taken literally, that view would suggest something like: a trillion-dollar computing budget spread across 10k researchers in 2010 (roughly $100M of compute per researcher) would result in AGI not long after, which looks a bit less plausible as a prediction but not out of the question.