the computers at the time were much simpler than the human brain (e.g. the IBM 701, with 73728 bits of memory), so any assumption that AIs could be built was also an assumption that most of the human brain’s processing was wasted.
This isn’t emphasized enough. The difference is many orders of magnitude. As the first example that springs to mind, Moravec estimated in 1997 a capacity of 100 million MIPS for simulating human behavior (that is, 1e14 instructions per second). (A brain simulation on a biological or physical level would probably take much more.) Wikipedia lists the IPS ratings of many chips; a top-of-the-line Intel CPU from 2011 achieves 177 thousand MIPS = 1.77e11.
The IBM 704 computer, released in 1954, achieved 4,000 instructions per second. (That’s 4e3, with no millions involved.) The IBM 701 mentioned in the post was slower (Wikipedia doesn’t specify by how much). Furthermore, there was no reason in 1956 to anticipate the rapid exponential progress later codified as Moore’s Law.
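To make the gap concrete, here is a quick back-of-the-envelope calculation using only the figures quoted above (Moravec’s 1997 estimate, the 2011 Intel CPU, and the IBM 704 rate); the numbers come from those sources, and the script merely works out the ratios.

```python
# Orders-of-magnitude comparison of the instruction rates quoted above.
# The figures are taken from the sources named in the comment; only the
# ratio arithmetic is new here.
import math

moravec_brain_ips = 100e6 * 1e6   # 100 million MIPS = 1e14 instructions/second
intel_2011_ips    = 177e3 * 1e6   # 177,000 MIPS = 1.77e11 instructions/second
ibm_704_ips       = 4e3           # ~4,000 instructions/second, as quoted above

for name, ips in [("2011 Intel CPU", intel_2011_ips), ("IBM 704", ibm_704_ips)]:
    ratio = moravec_brain_ips / ips
    print(f"brain estimate / {name}: {ratio:.3g}x (~{math.log10(ratio):.1f} orders of magnitude)")
```

On the quoted figures, the 2011 chip sits roughly three orders of magnitude below Moravec’s estimate, while the IBM 704 sits roughly ten below it, which is the scale of the gap the next paragraph points at.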
Given this huge difference in scale, they can’t have merely assumed the brain wastes most of its capacity; they would have had to assume the brain wastes all but a negligibly tiny fraction of its capacity. Do you know what they thought? Did they explicitly discuss this question?
Biology was pre-genetics, but they would have known that the brain must be doing something useful. A big brain is metabolically expensive. A big head makes childbirth dangerous. Humans have recently evolved much bigger brains than any other primate (even without correcting for body size), at the same time as they evolved intelligence and culture.
My guesses as to what they may have thought:
Most of the brain’s functions are things not immediately needed for AI (but then how do we explain the huge brains that are unique to humans?)
Big brains have a different evolutionary explanation, like sexual selection, and intelligence is an accidental byproduct. But that would need strong evidence.
Brains do things in extremely inefficient ways, because some designs simply can’t evolve, while humans can design more efficient solutions. But then we probably can’t learn anything from complex, inefficient brains in order to find those solutions. So why believe 1956-era computers were adequate for AI research?
Maybe they assumed that each macroscopic region of the brain was essentially made of a few simple neural circuits replicated over and over again to provide signal strength, much like a muscle is made of a few types of muscle fibers replicated over and over again. Just as you don’t need hundreds of billions of hydraulic cylinders to replicate the functionality of a muscle, they may have thought that you didn’t need hundreds of billions of processing components to replicate the functionality of the brain.
Was this a reasonable hypothesis? I don’t know if a neuroscientist of the time would have agreed, but it seems to me that it may not have been too far-fetched for the Dartmouth Conference people. I suppose that with the observation techniques of the time, the brain looked quite homogeneous below the level of macroscopic regions. The Dartmouth group also lacked theoretical insight into the brain’s complex patterns of connectivity. Moreover, computers of the time equaled or vastly surpassed humans at many tasks that were previously thought to require great intelligence, such as numerical computation.
Given this huge difference in scale, they can’t have merely assumed the brain wastes most of its capacity; they would have had to assume the brain wastes all but a negligibly tiny fraction of its capacity. Do you know what they thought? Did they explicitly discuss this question?
I think you’re making a jump here, or at least an insinuation about ‘waste’ that is unjustified: Moravec’s estimate in 1997 is not a good indicator of how much computing power they thought the human brain equated to back in the 1950s. An estimate from 1997, besides benefiting from decades of breakthroughs in neuroscience, is one made with the benefit of hindsight and of Moravec’s own paradox, itself a commentary on the Dartmouth-style failure. Given the general reason for the optimism (the striking success of the small early efforts in logic, chess-playing, etc.), I would personally expect them to have assigned very small estimates of the brain’s computing power, since that’s what they had observed so far. A case in point: IIRC, Alan Turing seems to have been an extreme pessimist in that he put off the development of human-level AI to ~2000, because that’s when he calculated gigabyte-sized memories would become available!
They probably just thought it takes that much brain matter to calculate the answer non-digitally, and brains didn’t have a choice in substrate or approach: it was neurons or nothing.
So you’re saying they didn’t expect to learn from how the human brain did things, or to emulate it. They just thought human-equivalent AI was inherently a simple problem, both algorithmically and in terms of the digital processing power needed.
I wonder if part of the reason they thought this was that not only was AI a very young field, but so was all of modern computer science. It had been developing very quickly because everyone had been working on very low-hanging fruit, problems that hadn’t even been interesting fifteen years before, because no computers had existed until then.
So all their salient examples were of quick progress in new fields. When confronted with a completely new research field, they assigned much higher priors to making rapid progress than we would in 2013, regardless of what that field was. That seems reasonable: AI happened to be one of the least tractable new fields, but they couldn’t know that in advance. Looking back and saying they got their predictions amazingly wrong demonstrates some hindsight/selection bias.
So you’re saying they didn’t expect to learn from how the human brain did things, or to emulate it. They just thought human-equivalent AI was inherently a simple problem, both algorithmically and in terms of the digital processing power needed.
That might be it. Recall Dijkstra on computers thinking and submarines swimming, and how very few of our technologies are biomimetic; note that the perceptron algorithm (never mind multilayer or backpropagation neural nets) dates only to 1957.
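As a reminder of just how simple that 1957-era machinery was, here is a minimal sketch of a single-layer perceptron with the classic mistake-driven update rule; the toy data, learning rate, and epoch count are my own illustrative choices, not anything the early researchers actually ran.

```python
# Minimal single-layer perceptron with the classic mistake-driven update rule.
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label) pairs with label in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n          # one weight per input feature
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation >= 0 else -1
            if prediction != y:                      # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy linearly separable problem: is x1 + x2 greater than 1?
points = [(random.random(), random.random()) for _ in range(200)]
data = [((x1, x2), 1 if x1 + x2 > 1 else -1) for x1, x2 in points]
w, b = train_perceptron(data)
print("learned weights:", w, "bias:", b)
```

A few hundred such units, each doing a handful of multiply-adds per input, is roughly the “trained perceptron with a few hundred nodes” imagined in the comparison below.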
But suppose they did have neural nets, and someone asked them: “Hey guys, maybe we’re wrong that the brain applies all these billions of neurons only because neurons just aren’t very good, so that with such weak parts you need billions of them to do anything remotely intelligent? If so, we’re being wildly overoptimistic, since a quick calculation says that if neurons are as complex and powerful as they could be, then we won’t get human-equivalent computers until, gosh, 2000 or worse! Let’s take our best chess-playing alpha-beta-pruning LISP-1 code and see how it does against a trained perceptron with a few hundred nodes.”
Now, I haven’t actually written a chess-playing program or a perceptron or compared them head-to-head, but I’m guessing the comparison would end with the GOFAI program crushing the perceptron in both chess skill and resources used up, possibly by orders of magnitude in both directions (since even now, with perceptrons long obsolete and all sorts of fancy new tools like deep learning networks, neural networks are still rarely used in chess-playing).
“So, that was a reasonable hypothesis, sure, but it looks like it just doesn’t pan out: we put the ‘powerful neurons’ to the test and they failed. And it’s early days yet! We’ve barely scratched the surface of computer chess!”
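For concreteness, here is a minimal sketch of the GOFAI half of that thought experiment: negamax search with alpha-beta pruning. To keep it self-contained it plays a toy take-away game (remove 1 to 3 sticks, taking the last stick wins) rather than chess, so the game, the value bounds, and the move ordering are purely my own illustration.

```python
# Negamax with alpha-beta pruning on a toy take-away game: players alternate
# removing 1-3 sticks from a pile, and whoever takes the last stick wins.
# Game values are +1 (win for the player to move) or -1 (loss), so those
# also serve as the alpha/beta bounds.

def negamax(sticks, alpha=-1, beta=1):
    """Return the game value for the player to move: +1 win, -1 loss."""
    if sticks == 0:
        return -1                      # previous player took the last stick and won
    best = -1
    for take in (1, 2, 3):
        if take > sticks:
            break
        value = -negamax(sticks - take, -beta, -alpha)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:              # prune: the opponent will never allow this line
            break
    return best

# Pile sizes that are multiples of 4 are losses for the player to move.
print(negamax(12))   # -1
print(negamax(13))   # 1
```

Even this unoptimized search solves the toy game in a fraction of a second with a few integers of state, which is the sort of resource asymmetry the hypothetical comparison above is gesturing at.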