So you’re saying they didn’t expect to learn from how human brain did things, or to emulate it. They just thought human-equivalent AI was inherently a simple problem, both algorithmically and in terms of the digital processing power needed.
That might be it. Recall Dijkstra on computers thinking and submarines swimming, and how very few of our technologies are biomimetic; note that the perceptron algorithm—never mind multilayer or backpropagation neural nets—dates only to 1957.
But suppose they did have neural nets, and someone asked them—“hey guys, maybe we’re wrong that the brain uses all those billions of neurons only because neurons just aren’t very good and you need billions of them to do anything remotely intelligent? If so, we’re being wildly overoptimistic, since a quick calculation says that if neurons are as complex and powerful as they could be, we won’t get human-equivalent computers until, gosh, 2000 or worse! Let’s take our best chess-playing alpha-beta-pruning LISP-1 code and see how it does against a trained perceptron with a few hundred nodes.”
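(To make that hypothetical “quick calculation” concrete, here is a rough sketch in Python; every figure in it is a commonly cited ballpark assumption of mine, not a number taken from anything above.)

```python
import math

# Illustrative back-of-envelope only: all figures below are my own
# ballpark assumptions, not numbers from the dialogue.
neurons = 1e11              # ~10^11 neurons in a human brain
synapses_per_neuron = 1e4   # ~10^4 synapses per neuron
firing_rate_hz = 100        # ~100 Hz as a generous firing rate

# If each synapse does useful work on every firing, the brain performs
# on the order of 10^17 operations per second.
brain_ops = neurons * synapses_per_neuron * firing_rate_hz

# Compare with an early-1960s machine at, say, ~10^5 ops/sec, and ask how
# long Moore's-law-style doubling (every ~2 years) would take to catch up.
machine_ops = 1e5
doublings = math.log2(brain_ops / machine_ops)
years_to_parity = 2 * doublings

print(f"brain estimate: ~{brain_ops:.0e} ops/sec")
print(f"years of doubling needed from ~1e5 ops/sec: ~{years_to_parity:.0f}")
# -> roughly 80 years, i.e. well past 2000 if you start in the early 1960s
```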
Now, I haven’t actually written a chess-playing program or a perceptron, let alone compared them head-to-head, but I’m guessing the comparison would end with the GOFAI program crushing the perceptron in both chess skill and resources consumed, possibly by orders of magnitude in both directions (since even now, with perceptrons long obsolete and all sorts of fancy new tools like deep learning networks, neural networks are still rarely used in chess-playing).
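Just to make the imagined head-to-head concrete, here is a minimal sketch of the two contenders, plain alpha-beta pruning and a Rosenblatt-style perceptron update. The game-state interface (legal_moves, apply, evaluate, is_terminal) is hypothetical and stands in for a real chess engine; this is an illustration, not an implementation of anything described above.

```python
def alphabeta(state, depth, alpha, beta, maximizing):
    """Plain alpha-beta pruning over a game tree (the GOFAI contender).
    `state` is a hypothetical game-state object, not a real chess engine."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        value = float("-inf")
        for move in state.legal_moves():
            value = max(value, alphabeta(state.apply(move), depth - 1,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the opponent will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for move in state.legal_moves():
            value = min(value, alphabeta(state.apply(move), depth - 1,
                                         alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # prune symmetrically for the minimizer
                break
        return value


def perceptron_update(weights, bias, features, label, lr=1.0):
    """One Rosenblatt-style update for a single threshold unit
    (the 'few hundred nodes' contender, one node at a time):
    predict, and if wrong, nudge the weights toward the correct label."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 if activation > 0 else -1
    if prediction != label:
        weights = [w + lr * label * x for w, x in zip(weights, features)]
        bias = bias + lr * label
    return weights, bias
```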
“So, that was a reasonable hypothesis, sure, but it looks like it just doesn’t pan out: we put the ‘powerful neurons’ to the test and they failed. And it’s early days yet! We’ve barely scratched the surface of computer chess!”