I can’t believe that post is sitting at 185 karma, considering how it opens with a completely blatant misquote/lie about Moravec’s central prediction, and only gets worse from there.
Moravec predicted (in Mind Children, in 1988!) AGI in 2028, based on Moore’s law and the brain reverse-engineering assumption. He was prescient, a true prophet/futurist. EY was wrong, and his attempt to smear Moravec here is simply embarrassing.
Even with some disagreements wrt how powerful AI can be, I definitely agree that Eliezer is pretty bad epistemically speaking on anything related to AI or alignment topics, and we should stop treating him as any kind of authority.
I’m reminded of this thread from 2022: https://www.lesswrong.com/posts/27EznPncmCtnpSojH/link-post-on-deference-and-yudkowsky-s-ai-risk-estimates?commentId=SLjkYtCfddvH9j38T#SLjkYtCfddvH9j38T