So you’re saying they didn’t expect to learn from how the human brain did things, or to emulate it. They just thought human-equivalent AI was inherently a simple problem, both algorithmically and in terms of the digital processing power needed.
I wonder if part of the reason they thought this was that not only was AI a very young field, but so was all of modern computer science. It had been developing very quickly because everyone had been working on very low-hanging fruit that hadn’t been interesting even fifteen years earlier, since no computers had existed before then.
So all their salient examples were of quick progress in new fields. When confronted with a completely new research field, they assigned much higher priors to making rapid progress than we would in 2013, regardless of what that field was. That seems reasonable—AI happened to be one of the least tractable new fields, but they couldn’t know that in advance. Looking back and saying they got their predictions amazingly wrong demonstrates some hindsight/selection bias.
That might be it. Recall Dijkstra on computers thinking and submarines swimming, and how very few of our technologies are biomimetic; note that the perceptron algorithm—never mind multilayer or backpropagation neural nets—dates to 1957.
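(For a sense of how little machinery that 1957 algorithm involves, here is a minimal sketch of the classic perceptron learning rule, in Python rather than anything period-accurate; the toy data and learning rate are made up purely for illustration.)

```python
import random

def train_perceptron(examples, n_features, epochs=50, lr=1.0):
    """Rosenblatt-style perceptron rule: one layer of weights plus a bias,
    nudged only when the current guess is wrong."""
    examples = list(examples)
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for x, y in examples:                     # y is +1 or -1
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if guess != y:                        # mistake-driven update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy use: a handful of made-up 2-feature points, separable by "x1 > x2".
data = [((1.0, 0.0), 1), ((0.0, 1.0), -1), ((0.8, 0.2), 1), ((0.3, 0.9), -1)]
print(train_perceptron(data, n_features=2))
```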
But suppose they did have neural nets, and someone asked them—“hey guys, maybe we’re wrong that the brain only uses all these billions of neurons because neurons just aren’t very good; maybe you really do need billions of them if you’re going to do anything remotely intelligent? If so, we’re being wildly overoptimistic, since a quick calculation says that if neurons are as complex and powerful as they could be, then we won’t get human-equivalent computers until, gosh, past 2000 or worse! Let’s take our best chess-playing alpha-beta-pruning LISP-1 code and see how it does against a trained perceptron with a few hundred nodes.”
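(The “quick calculation” would presumably be the usual back-of-envelope comparison of the brain’s raw operation count against machine speeds plus some steady-growth trend. Here is an illustrative version; every figure in it is a rough guess of mine for the sake of the arithmetic, not a number our imaginary researchers would have had.)

```python
import math

# Back-of-envelope: if each neuron really is doing useful computational work,
# how long until machines catch up? Every figure below is a rough assumption.
neurons = 1e11               # roughly 100 billion neurons
synapses_per_neuron = 1e4    # order-of-magnitude connectivity
firing_rate_hz = 1e2         # roughly 100 Hz, a generous firing rate
brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz  # ~1e17

machine_ops_1960 = 1e5       # a fast early-1960s machine, very roughly
doubling_time_years = 2      # assume machine speed doubles every 2 years

years_to_parity = doubling_time_years * math.log2(brain_ops_per_sec / machine_ops_1960)
print(f"brain ~ {brain_ops_per_sec:.0e} ops/s, parity around {1960 + years_to_parity:.0f}")
# With these guesses, parity lands decades past 2000, which is the point of the
# hypothetical objection: powerful neurons imply a long wait, not a short one.
```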
Now, I haven’t actually written a chess-playing program or a perceptron, let alone compared them head-to-head, but I’m guessing the comparison would end with the GOFAI program crushing the perceptron in both chess skill and resources used up, possibly by orders of magnitude in both directions (since even now, with perceptrons long obsolete and all sorts of fancy new tools like deep learning networks, neural networks are still rarely used in chess-playing); a sketch of what the GOFAI half amounts to is below.
“So, that was a reasonable hypothesis, sure, but it looks like it just doesn’t pan out: we put the ‘powerful neurons’ to the test and they failed. And it’s early days yet! We’ve barely scratched the surface of computer chess!”
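(For concreteness, the GOFAI half of that imagined match-up is not much code: a depth-limited search with alpha-beta pruning over some evaluation function. Below is a minimal sketch in Python rather than period LISP, with the chess-specific parts, move generation and evaluation, stubbed out by a toy take-away game so that it actually runs.)

```python
from math import inf

def negamax(state, depth, moves, apply_move, evaluate, alpha=-inf, beta=inf):
    """Depth-limited negamax search with alpha-beta pruning.
    `evaluate(state)` must score the position from the side to move's view."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = -inf
    for m in legal:
        score = -negamax(apply_move(state, m), depth - 1,
                         moves, apply_move, evaluate, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # cutoff: the opponent would never allow this line
            break
    return best

# Toy stand-in for chess: a pile of sticks, each player removes 1-3 per turn,
# and whoever takes the last stick wins. A chess engine would swap in a board
# state, a legal-move generator, and a material evaluator instead.
def moves(n):
    return [k for k in (1, 2, 3) if k <= n]

def apply_move(n, k):
    return n - k

def evaluate(n):
    return -1 if n == 0 else 0   # no sticks left: the player to move has lost

start = 10
best_take = max(moves(start),
                key=lambda k: -negamax(apply_move(start, k), start,
                                       moves, apply_move, evaluate))
print("best opening take from", start, "sticks:", best_take)   # prints 2
```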