Given this huge difference in scale, they can’t have merely assumed the brain wastes most of its capacity; they would have had to assume the brain wastes all but a negligibly tiny fraction of its capacity. Do you know what they thought? Did they explicitly discuss this question?
I think you’re making a jump here, or at least making an insinuation about ‘waste’ which is unjustified: Moravec’s estimate in 1997 is not a good indicator of how much computing power they thought the human brain equated to back in the 1950s. An estimate made in 1997, besides benefiting from decades of neuroscience breakthroughs, is one made with the benefit of hindsight; Moravec’s own paradox is itself a comment on the Dartmouth-style failure. Given the general reason for the optimism—the striking success of the small early efforts in logic, chess-playing, etc.—I would personally expect them to have assigned very small estimates of the brain’s computing power, since that’s what they had observed so far. A case in point: IIRC, Alan Turing seems to have been an extreme pessimist in that he put off the development of human-level AI to ~2000, because that’s when he calculated gigabyte-sized memories would become available!
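(The arithmetic behind that last point is easy to redo: in “Computing Machinery and Intelligence” (1950), Turing’s figure was a storage capacity of about 10^9, reachable in about fifty years’ time. The back-of-envelope sketch below just converts the units and extrapolates growth from an assumed 1950 baseline and an assumed doubling time; both numbers are illustrative guesses of mine, not historical data.)

```python
# Back-of-envelope on Turing's 1950 storage estimate: ~10^9 of storage,
# available "in about fifty years' time". The 1950 baseline and the doubling
# time below are illustrative assumptions, not measurements.
import math

turing_bits = 1e9                      # Turing's figure, read as ~10^9 binary digits
print(f"10^9 bits ~= {turing_bits / 8 / 1e6:.0f} MB")      # ~125 MB if 10^9 is read as bits

baseline_bits_1950 = 1e3               # assumed: ~1 kilobit of fast store in 1950
doubling_years = 2.5                   # assumed: capacity doubles every ~2.5 years

doublings = math.log2(turing_bits / baseline_bits_1950)    # ~19.9 doublings needed
print(f"year reached: ~{1950 + doublings * doubling_years:.0f}")   # ~2000
```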
They probably just thought it took that much brain matter to calculate the answer non-digitally, and that brains didn’t have a choice in substrate or approach: it was neurons or nothing.
So you’re saying they didn’t expect to learn from how the human brain did things, or to emulate it. They just thought human-equivalent AI was inherently a simple problem, both algorithmically and in terms of the digital processing power needed.
I wonder if part of the reason they thought this was that not only was AI a very young field, but so was all of modern computer science. It had been developing very quickly because everyone had been working on very low-hanging fruit, problems that hadn’t even been interesting fifteen years before because no computers had existed then.
So all their salient examples were of quick progress in new fields. When confronted with a completely new research field, they assigned much higher priors to making rapid progress than we would in 2013, regardless of what that field was. That seems reasonable—AI happened to be one of the least tractable new fields, but they couldn’t have known that in advance. Looking back and saying they got their predictions amazingly wrong demonstrates some hindsight/selection bias.
That might be it. Recall Dijkstra on computers thinking and submarines swimming, and how very few of our technologies are biomimetic; note that the perceptron algorithm—never mind multilayer or backpropagation neural nets—dates to 1957.
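(To give a sense of how little machinery that 1957 algorithm involves, here is a minimal sketch of Rosenblatt’s perceptron learning rule on a toy linearly separable problem, logical AND; the data, learning rate, and epoch count are illustrative choices of mine, not anything from the period.)

```python
# Minimal sketch of Rosenblatt's 1957 perceptron learning rule on a toy
# linearly separable problem (logical AND). Illustrative, not era-accurate code.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # (inputs, target)

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    # Threshold unit: fire iff the weighted sum exceeds zero.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for epoch in range(20):                 # a handful of passes suffices here
    for x, target in data:
        err = target - predict(x)       # nonzero only on a mistake
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]   # Rosenblatt update
        b += lr * err

print(w, b)
print([predict(x) for (x, _) in data])  # -> [0, 0, 0, 1], i.e. AND
```

The entire learning rule is the two update lines inside the loop.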
But suppose they did have neural nets, and someone asked them—“hey guys, maybe we’re wrong that the brain applies all these billions of neurons only because neurons just aren’t very good and you need billions of them to do anything remotely intelligent? If so, we’re being wildly overoptimistic, since a quick calculation says that if neurons are as complex and powerful as they could be, then we won’t get human-equivalent computers until past, gosh, 2000 or worse! Let’s take our best chess-playing alpha-beta-pruning LISP-1 code and see how it does against a trained perceptron with a few hundred nodes.”
Now, I haven’t actually written a chess-playing program or a perceptron, much less compared them head-to-head, but I’m guessing the comparison would end with the GOFAI program crushing the perceptron in both chess skill and resources used, possibly by orders of magnitude on both counts (since even now, with perceptrons long obsolete and all sorts of fancy new tools like deep learning networks, neural networks are still rarely used in chess-playing).
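(For concreteness, here is a minimal sketch of the GOFAI half of that match-up: alpha-beta pruning over a toy game tree represented as nested lists, with integers standing in for leaf evaluations. A real chess program of the era would add move generation and a hand-tuned material evaluation on top of essentially this search skeleton; the tree below is made up purely for illustration.)

```python
# Minimal alpha-beta pruning over a toy game tree: nested lists are internal
# nodes, integers are static leaf evaluations. Illustrative sketch only.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):            # leaf: return the static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:            # alpha cutoff
                break
        return value

# Two plies deep: the maximizer picks the branch whose worst reply is best.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # -> 3
```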
“So, that was a reasonable hypothesis, sure, but it looks like it just doesn’t pan out: we put the ‘powerful neurons’ to the test and they failed. And it’s early days yet! We’ve barely scratched the surface of computer chess!”