Hubert Dreyfus, probably the most famous historical AI critic, published “Alchemy and Artificial Intelligence” in 1965, which argued that the techniques popular at the time were insufficient for AGI.
That is not at all what the summary says. Here is roughly the same text from the abstract:
Early successes in programming digital computers to exhibit simple forms of intelligent behavior, coupled with the belief that intelligent activities differ only in their degree of complexity, have led to the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer. Attempts to simulate cognitive processes on computers have, however, run into greater difficulties than anticipated. An examination of these difficulties reveals that the attempt to analyze intelligent behavior in digital computer language systematically excludes three fundamental human forms of information processing (fringe consciousness, essence/accident discrimination, and ambiguity tolerance). Moreover, there are four distinct types of intelligent activity, only two of which do not presuppose these human forms of information processing and can therefore be programmed. Significant developments in artificial intelligence in the remaining two areas must await computers of an entirely different sort, of which the only existing prototype is the little-understood human brain.
In case you thought he just meant greater speed, he says the opposite on PDF page 71. Here is roughly the same text again from a work I can actually copy and paste:
It no longer seems obvious that one can introduce search heuristics which enable the speed and accuracy of computers to bludgeon through in those areas where human beings use more elegant techniques. Lacking any a priori basis for confidence, we can only turn to the empirical results obtained thus far. That brute force can succeed to some extent is demonstrated by the early work in the field. The present difficulties in game playing, language translation, problem solving, and pattern recognition, however, indicate a limit to our ability to substitute one kind of “information processing” for another. Only experimentation can determine the extent to which newer and faster machines, better programming languages, and cleverer heuristics can continue to push back the frontier. Nonetheless, the dramatic slowdown in the fields we have considered and the general failure to fulfill earlier predictions suggest the boundary may be near. Without the four assumptions to fall back on, current stagnation should be grounds for pessimism.
This, of course, has profound implications for our philosophical tradition. If the persistent difficulties which have plagued all areas of artificial intelligence are reinterpreted as failures, these failures must be interpreted as empirical evidence against the psychological, epistemological, and ontological assumptions. In Heideggerian terms this is to say that if Western Metaphysics reaches its culmination in Cybernetics, the recent difficulties in artificial intelligence, rather than reflecting technological limitations, may reveal the limitations of technology.
If indeed Dreyfus meant to critique 1965's algorithms—which is not what I'm seeing, and certainly not what I quoted—it would be surprising for him to get so much wrong. How did this occur?
If indeed Dreyfus meant to critique 1965's algorithms—which is not what I'm seeing, and certainly not what I quoted
It seems to me like that’s pretty much what those quotes say—that there wasn’t, at that time, algorithmic progress sufficient to produce anything like human intelligence.
Again, he plainly says more than that. He’s challenging “the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer.” He asserts as fact that certain types of cognition require hardware more like a human brain. Only two out of four areas, he claims, “can therefore be programmed.” In case that’s not clear enough, here’s another quote of his:
since Area IV is just that area of intelligent behavior in which the attempt to program digital computers to exhibit fully formed adult intelligence must fail, the unavoidable recourse in Area III to heuristics which presuppose the abilities of Area IV is bound, sooner or later, to run into difficulties. Just how far heuristic programming can go in Area III before it runs up against the need for fringe consciousness, ambiguity tolerance, essential/inessential discrimination, and so forth, is an empirical question. However, we have seen ample evidence of trouble in the failure to produce a chess champion, to prove any interesting theorems, to translate languages, and in the abandonment of GPS.
He does not say that better algorithms are needed for Area IV, but that digital computers must fail. He goes on to predict that clever search together with “newer and faster machines” cannot produce a chess champion. AFAICT that prediction is false even under the charitable reading that he only meant more human-like reasoning would be needed.
The doc Jessicata linked has page numbers but no embedded text. Can you give a page number for that one?
Unlike your other quotes, it at least seems to say what you’re saying it says. But it appears to start mid-sentence, and in any case I’d like to read it in context.
Assuming you mean the last blockquote, that would be the Google result I mentioned which has text, so you can go there, press Ctrl-F, and type “must fail” or similar.
You can also read the beginning of the PDF, which talks about what can and can’t be programmed while making clear this is about hardware and not algorithms. See the first comment in this family for context.
It also seems to me those quotes say that the general methodology, assumptions, and paradigm of the time were incapable of handling important parts of intelligence.