AI prediction case study 2: Dreyfus’s Artificial Alchemy

Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled “The errors, insights and lessons of famous AI predictions and what they mean for the future” to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Tight deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we present it here after the fact.
The prediction classification schemas can be found in the first case study.
Dreyfus’s Artificial Alchemy
Classification: issues and metastatements, using the outside view, non-expert judgement and philosophical arguments.
Hubert Dreyfus was a prominent early critic of Artificial Intelligence. He published a series of papers and books attacking the claims and assumptions of the AI field, starting in 1965 with a paper for the RAND Corporation entitled ‘Alchemy and AI’ (Dre65). The paper was famously combative, analogising AI research to alchemy and ridiculing AI claims. Later, D. Crevier would claim that “time has proven the accuracy and perceptiveness of some of Dreyfus’s comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier” (Cre93). Setting aside the formulation issues, were Dreyfus’s criticisms actually correct, and what can be learned from them?
Was Dreyfus an expert? Though a reasonably prominent philosopher, there was nothing in his background to suggest specific expertise in theories of mind and consciousness, and nothing at all to suggest familiarity with artificial intelligence and the problems of the field. Thus Dreyfus cannot be considered anything more than an intelligent outsider.
This makes the pertinence and accuracy of his criticisms all the more impressive. Dreyfus highlighted several over-optimistic claims for the power of AI, predicting, correctly, that the optimism of 1965 would fade (with, for instance, decent chess computers still a long way off). He used the outside view to claim this as a near-universal pattern in AI: initial successes, followed by lofty claims, followed by unexpected difficulties and subsequent disappointment. He highlighted the inherent ambiguity in human language and syntax, and claimed that computers could not deal with it. He noted the importance of unconscious processes in recognising objects, the importance of context, and the fact that humans and computers operate in very different ways. He also criticised the use of computational paradigms for analysing human behaviour, and claimed that philosophical ideas in linguistics and classification were relevant to AI research. In all, his paper is full of interesting ideas and intelligent deconstructions of how humans and machines operate.
All these are astoundingly prescient predictions for 1965, when computers were in their infancy and their limitations were only beginning to be understood. Moreover, he was not only often right, but right for the right reasons (see, for instance, his understanding of the difficulties computers would have in dealing with ambiguity). Not everything Dreyfus wrote was correct, however; apart from minor specific points (such as his distrust of heuristics), he erred mostly by pushing his predictions to extremes. He claimed that ‘the boundary may be near’ in computer abilities, and concluded with:
“… what can now be done? Nothing directly towards building machines which can be intelligent. [...] in the long run [we must think] of non-digital automata...”
Currently, however, there exist ‘digital automata’ that can beat all humans at chess, translate most passages to at least an understandable level, and beat humans at ‘Jeopardy’, a linguistically ambiguous arena (Gui11). He also failed to foresee that workers in AI would eventually develop new methods to overcome the problems he had outlined. Though Dreyfus would later state that he never claimed AI achievements were impossible (McC04), there is no reason to pay attention to later re-interpretations: Dreyfus’s 1965 article strongly suggests that AI progress was bounded. These failures illustrate the principle that even the best predictors are vulnerable to overconfidence.
In 1965, people would have been justified in finding Dreyfus’s analysis somewhat implausible. It was the work of an outsider with no specific relevant expertise, and it dogmatically contradicted the opinion of genuine experts inside the AI field. Though the claims it made about human and machine cognition seemed plausible, there is a great difference between seeming plausible and actually being correct, and his own non-expert judgement was the main backing for the claims. Outside of logic, philosophy had yet to contribute much to the field of AI, so there was no intrinsic reason to listen to a philosopher. There were, however, a few signs that the paper was of high quality: Dreyfus seemed very knowledgeable about progress and work in AI, and most of his analyses of human cognition were falsifiable, at least to some extent. These were still not strong arguments for heeding the skeptical opinions of an outsider.
The subsequent partial vindication of the paper is therefore a stark warning: it is very difficult to estimate the accuracy of outsider predictions. There were many reasons to reject Dreyfus’s predictions in 1965, and yet doing so would have been the wrong call. Blindly accepting non-expert outsider predictions would also have been a mistake, however: these are most often in error. One general lesson concerns the need to decrease certainty: the computer scientists of 1965 should at least have accepted the possibility (if not the plausibility) that some of Dreyfus’s analysis was correct, and they should have started paying more attention to the ‘success-excitement-difficulties-stalling’ cycles in their field to see whether the pattern continued. A second lesson concerns the importance of philosophy: it does seem that philosophers’ meta-analytical skills can contribute useful ideas to AI, a fact that is certainly not self-evident.
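To make the first lesson a little more concrete, here is a minimal Bayesian sketch (an illustration for this post, not anything from the paper): an observer starts with a low prior that the outsider’s model of the field is broadly right, and updates each time AI completes another success-excitement-difficulties-stalling cycle that the model predicted. All the probabilities are invented for the example.

```python
# Toy Bayes update: how should credence in an outsider's model of AI
# progress move as the field completes hype cycles the model predicted?
# All priors and likelihoods below are invented for illustration.

def update(prior, p_cycle_if_right, p_cycle_if_wrong):
    """Posterior P(model right | one more predicted cycle observed)."""
    joint_right = prior * p_cycle_if_right
    joint_wrong = (1 - prior) * p_cycle_if_wrong
    return joint_right / (joint_right + joint_wrong)

credence = 0.05  # a 1965-style skeptical prior that the outsider is broadly right
for cycle in range(1, 5):
    credence = update(credence, p_cycle_if_right=0.8, p_cycle_if_wrong=0.3)
    print(f"after cycle {cycle}: credence = {credence:.2f}")

# Output: 0.12, 0.27, 0.50, 0.73 -- even a skeptical prior should not
# survive several correctly predicted cycles unchanged.
```

The exact numbers are beside the point; the point is that ‘accepting the possibility’ has a quantitative shape: each completed cycle that the outsider predicted in advance should visibly move the needle.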
References:
[Arm] Stuart Armstrong. General purpose intelligence: arguing the orthogonality thesis. In preparation.
[ASB12] Stuart Armstrong, Anders Sandberg, and Nick Bostrom. Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 22:299-324, 2012.
[BBJ+03] S. Bleich, B. Bandelow, K. Javaheripour, A. Muller, D. Degner, J. Wilhelm, U. Havemann-Reinecke, W. Sperling, E. Ruther, and J. Kornhuber. Hyperhomocysteinemia as a new risk factor for brain shrinkage in patients with alcoholism. Neuroscience Letters, 335:179-182, 2003.
[Bos13] Nick Bostrom. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Forthcoming in Minds and Machines, 2013.
[Cre93] Daniel Crevier. AI: The Tumultuous Search for Artificial Intelligence. BasicBooks, New York, 1993.
[Den91] Daniel Dennett. Consciousness Explained. Little, Brown and Co., 1991.
[Deu12] D. Deutsch. The very laws of physics imply that artificial intelligence must be possible. What’s holding us up? Aeon, 2012.
[Dre65] Hubert Dreyfus. Alchemy and AI. RAND Corporation, 1965.
[eli66] Joseph Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9:36-45, 1966.
[Fis75] Baruch Fischhoff. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1:288-299, 1975.
[Gui11] Erico Guizzo. IBM’s Watson Jeopardy computer shuts down humans in final game. IEEE Spectrum, 17, 2011.
[Hal11] J. Hall. Further reflections on the timescale of AI. In Solomonoff 85th Memorial Conference, 2011.
[Han94] R. Hanson. What if uploads come first: The crack of a future dawn. Extropy, 6(2), 1994.
[Har01] S. Harnad. What’s wrong and right about Searle’s Chinese room argument? In M. Bishop and J. Preston, editors, Essays on Searle’s Chinese Room Argument. Oxford University Press, 2001.
[Hau85] John Haugeland. Artificial Intelligence: The Very Idea. MIT Press, Cambridge, Mass., 1985.
[Hof62] Richard Hofstadter. Anti-intellectualism in American Life. 1962.
[Kah11] D. Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
[KL93] Daniel Kahneman and Dan Lovallo. Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39:17-31, 1993.
[Kur99] R. Kurzweil. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking Adult, 1999.
[McC79] J. McCarthy. Ascribing mental qualities to machines. In M. Ringle, editor, Philosophical Perspectives in Artificial Intelligence. Harvester Press, 1979.
[McC04] Pamela McCorduck. Machines Who Think. A. K. Peters, Ltd., Natick, MA, 2004.
[Min84] Marvin Minsky. Afterword to Vernor Vinge’s novel “True Names”. Unpublished manuscript, 1984.
[Moo65] G. Moore. Cramming more components onto integrated circuits. Electronics, 38(8), 1965.
[Omo08] Stephen M. Omohundro. The basic AI drives. Frontiers in Artificial Intelligence and Applications, 171:483-492, 2008.
[Pop] Karl Popper. The Logic of Scientific Discovery. Mohr Siebeck.
[Rey86] G. Rey. What’s really going on in Searle’s “Chinese room”. Philosophical Studies, 50:169-185, 1986.
[Riv12] William Halse Rivers. The disappearance of useful arts. Helsingfors, 1912.
[San08] A. Sandberg and N. Bostrom. Whole brain emulation: a roadmap. Future of Humanity Institute Technical Report, 2008-3, 2008.
[Sea80] J. Searle. Minds, brains and programs. Behavioral and Brain Sciences, 3(3):417-457, 1980.
[Sea90] John Searle. Is the brain’s mind a computer program? Scientific American, 262:26-31, 1990.
[Sim55] H.A. Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 69:99-118, 1955.
[Tur50] A. Turing. Computing machinery and intelligence. Mind, 59:433-460, 1950.
[vNM44] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, NJ, Princeton University Press, 1944.
[Wal05] Chip Walter. Kryder’s law. Scientific American, 293:32-33, 2005.
[Win71] Terry Winograd. Procedures as a representation for data in a computer program for understanding natural language. MIT AI Technical Report, 235, 1971.
[Yam12] Roman V. Yampolskiy. Leakproofing the singularity: artificial intelligence confinement problem. Journal of Consciousness Studies, 19:194-214, 2012.
[Yud08] Eliezer Yudkowsky. Artificial intelligence as a positive and negative factor in global risk. In Nick Bostrom and Milan M. Ćirković, editors, Global catastrophic risks, pages 308-345, New York, 2008. Oxford University Press.