It now seems clear to me that EY was not bullish on neural networks leading to impressive AI capabilities. Eliezer said this directly:
I’m no fan of neurons; this may be clearer from other posts.[1]
I think this is strong evidence for my interpretation of the quotes in my parent comment: He’s not just mocking the local invalidity of reasoning “because humans have lots of neurons, AI with lots of neurons → smart”, he’s also mocking neural network-driven hopes themselves.
More quotes from Logical or Connectionist AI?:
Not to mention that neural networks have also been “failing” (i.e., not yet succeeding) to produce real AI for 30 years now. I don’t think this particular raw fact licenses any conclusions in particular. But at least don’t tell me it’s still the new revolutionary idea in AI.
This is the original example I used when I talked about the “Outside the Box” box—people think of “amazing new AI idea” and return their first cache hit, which is “neural networks” due to a successful marketing campaign thirty goddamned years ago. I mean, not every old idea is bad—but to still be marketing it as the new defiant revolution? Give me a break.
In this passage, he employs well-scoped, well-hedged language ("this particular raw fact"). I like this writing because it points out an observation and then states what inferences, if any, he draws from it. Overall, his tone is negative on neural networks.
Let’s open up that “Outside the Box” box:
In Artificial Intelligence, everyone outside the field has a cached result for brilliant new revolutionary AI idea—neural networks, which work just like the human brain! New AI Idea: complete the pattern: “Logical AIs, despite all the big promises, have failed to provide real intelligence for decades—what we need are neural networks!”
This cached thought has been around for three decades. Still no general intelligence. But, somehow, everyone outside the field knows that neural networks are the Dominant-Paradigm-Overthrowing New Idea, ever since backpropagation was invented in the 1970s. Talk about your aging hippies.
This is more incorrect mockery.