Agreed, tho I think Eliezer disagrees?
I think Eliezer does disagree. I find his disagreement fairly annoying. He calls biological anchors the “trick that never works” and gives an initial example of Moravec predicting AGI in 2010 in the book Mind Children.
But as far as I can tell that’s just Eliezer putting words in Moravec’s mouth. Moravec doesn’t make very precise predictions in the book, but the heading of the relevant section is “human equivalence in 40 years” (i.e. 2028; the book was written in 1988). Eliezer thinks that Moravec ought to have predicted human-level AI, and shortly thereafter a singularity, at the point when a giant computing cluster has as much compute as a brain, which Moravec puts in 2010. But I don’t see any evidence that Moravec accepted that implication, and the book seems to generally talk about a timeframe like 2030-2040. Eliezer repeated this claim in our conversation but still didn’t really provide any indication that Moravec held this view.
To the extent that people were imagining neural networks, I don’t think they would expect trained neural networks to be the size of a computing cluster. It’s not the straightforward extrapolation from the kinds of neural networks people were actually computing, so someone going on vibes wouldn’t make that forecast. And if you try to actually pencil out the training cost it’s clear it won’t work, since you have to run a neural network a huge number of times during training, so someone trying to think things through on paper wouldn’t think that either (rough sketch below). At least since the 1990s I’ve seen a lot of people making predictions along these lines, and as far as I can tell they give actual predictions in the 2020s or 2030s, which currently look quite good to me relative to every other forecasting methodology.
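To make that penciling-out concrete, here is a minimal back-of-envelope sketch. Every constant below is a round-number assumption of mine, not a figure from Moravec or Eliezer; the point is only the shape of the gap between running a network once and training it.

```python
# Back-of-envelope: a cluster just big enough to *run* a brain-sized network in
# real time is nowhere near big enough to *train* one quickly, because training
# means running the network a huge number of times.
# All constants are illustrative assumptions.

brain_flops = 1e16                 # assumed compute for brain-equivalent inference, FLOP/s
cluster_flops = 1e16               # a cluster that just matches it in real time

train_steps = 1e8                  # assumed number of training passes needed
flops_per_step = 3 * brain_flops   # ~3x one second of inference per forward+backward step

total_train_flops = train_steps * flops_per_step
years_on_cluster = total_train_flops / cluster_flops / (3600 * 24 * 365)

print(f"training compute: {total_train_flops:.0e} FLOP")
print(f"wall-clock on the inference-sized cluster: ~{years_on_cluster:.0f} years")
```

Even with these fairly generous assumptions the training run takes on the order of a decade on the cluster that merely matches the brain at inference time, and more pessimistic step counts push it far beyond that.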
This graph nicely summarizes his timeline from Mind Children in 1988. The book itself presents his view that AI progress is primarily constrained by compute power available to most researchers, which is usually around that of a PC.
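For readers without the book or graph to hand, the kind of extrapolation behind it can be reproduced in a few lines. The constants here are my own round-number assumptions in Moravec’s general spirit, not his exact figures:

```python
import math

# Moravec-style biological-anchor extrapolation (illustrative constants only).
brain_equiv_mips = 1e8     # assumed brain-equivalent compute, in MIPS
pc_mips_1988 = 1.0         # rough compute of a typical researcher's PC in 1988, in MIPS
doubling_years = 1.5       # assumed Moore's-Law-style doubling time

doublings_needed = math.log2(brain_equiv_mips / pc_mips_1988)
crossing_year = 1988 + doublings_needed * doubling_years
print(f"PC-level human equivalence around {crossing_year:.0f}")  # ~2028
```

With those assumptions the personal-computer crossing lands around 2028, consistent with the “human equivalence in 40 years” heading; a supercomputer or large cluster crosses the same threshold earlier, which is where the ambiguity discussed below comes from.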
Moravec et al were correct in multiple key disagreements with EY et al:
That progress was smooth and predictable from Moore’s Law (similar to how the arrival of flight is postdictable from internal-combustion-engine progress)
That AGI would be based on brain-reverse engineering, and thus will be inherently anthropomorphic
That “recursive self-improvement” was mostly relevant only in the larger systemic sense (civilization level)
LLMs are far more anthropomorphic (brain-like) than the fast clean consequential reasoners EY expected:
close correspondence to linguistic cortex (internal computations and training objective)
complete with human-like cognitive biases!
unexpected human-like limitations: struggles with simple tasks like arithmetic, longer-term planning, etc.
AGI misalignment insights from Jungian psychology are more effective/useful/popular than MIRI’s core research
All of this was predicted from the systems/cybernetic framework/rubric that human minds are software constructs, brains are efficient and tractable, and thus AGI is mostly about reverse engineering the brain and then downloading/distilling human mindware into the new digital substrate.
I don’t know if the graph settles the question—is Moravec predicting AGI at “Human equivalence in a supercomputer” or “Human equivalence in a personal computer”? Hard to say from the graph.
The fact that he specifically talks about “compute power available to most researchers” makes it clearer what his predictions are. Taken literally, that view would suggest something like: a trillion-dollar computing budget spread across 10k researchers in 2010 would result in AGI not too long after (rough arithmetic below), which looks a bit less plausible as a prediction but not out of the question.
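Rough arithmetic for that literal reading; the hardware price is a loose assumption for circa-2010 hardware, not a sourced number:

```python
# Illustrative only: what "a trillion-dollar computing budget across 10k
# researchers in 2010" would buy, under an assumed hardware price.
total_budget_usd = 1e12
num_researchers = 1e4
usd_per_gflops = 1.0        # assumed ~2010 cost per GFLOP/s of hardware

per_researcher_usd = total_budget_usd / num_researchers             # $100M each
per_researcher_flops = per_researcher_usd / usd_per_gflops * 1e9    # FLOP/s each

print(f"${per_researcher_usd:,.0f} of hardware per researcher")
print(f"~{per_researcher_flops:.0e} FLOP/s each")
```

Under those assumptions each researcher gets on the order of 1e17 FLOP/s, which is in the ballpark of many brain-equivalent compute estimates; that is why the literal reading implies AGI not too long after.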
I can’t believe that post is sitting at 185 karma considering how it opens with a blatant misquote/lie about Moravec’s central prediction, and only gets worse from there.
Moravec predicted AGI in 2028 (in Mind Children, in 1988!), based on Moore’s Law and the brain-reverse-engineering assumption. He was prescient, a true prophet/futurist. EY was wrong, and his attempt to smear Moravec here is simply embarrassing.
I’m reminded of this thread from 2022: https://www.lesswrong.com/posts/27EznPncmCtnpSojH/link-post-on-deference-and-yudkowsky-s-ai-risk-estimates?commentId=SLjkYtCfddvH9j38T#SLjkYtCfddvH9j38T
Even with some disagreements w.r.t. how powerful AI can be, I definitely agree that Eliezer is pretty bad epistemically speaking on anything related to AI or alignment topics, and we should stop treating him as any kind of authority.