Intelligence Explosion analysis draft: introduction

I invite your feedback on this snippet from an intelligence explosion analysis that Anna Salamon and I have been working on.
This snippet is a possible introduction to the analysis article. Its purpose is to show readers that we aim to take seriously some common concerns about singularity thinking, to bring readers into Near Mode about the topic, and to explain the purpose and scope of the article.
Note that the target style is serious but still more chatty than a normal journal article.
_____
The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
Vernor Vinge
Humans may create human-level artificial intelligence in this century (Bainbridge 2006; Baum, Goertzel, and Goertzel 2011; Bostrom 2003; Legg 2008; Sandberg and Bostrom 2011). Shortly thereafter, we may see an “intelligence explosion” or “technological Singularity” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole (Chalmers 2010).
How likely is this, and what should we do about it? Others have discussed these questions previously (Turing 1950; Good 1965; Von Neumann 1966; Solomonoff 1985; Vinge 1993; Yudkowsky 2001, 2008a; Russell and Norvig 2010, sec. 26.3); we will build on their thinking in our review of the subject.
Singularity Skepticism
Many are skeptical of Singularity arguments because they associate such arguments with detailed storytelling: the “if and then” fallacy of “speculative ethics,” by which an improbable conditional is quietly treated as an actuality (Nordmann 2007). They are right to be wary: hundreds of studies show that humans are overconfident in their beliefs (Moore and Healy 2008), regularly overestimate the probability of detailed, visualized scenarios (Tversky and Kahneman 2002), and tend to seek out only information that confirms their current views (Nickerson 1998). AI researchers are not immune to these errors, as evidenced by a history of over-optimistic predictions stretching back to the 1956 Dartmouth conference on AI (Dreyfus 1972).
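To make the storytelling worry concrete (this gloss is ours, not a quotation from the works cited above): adding detail to a scenario can only remove probability mass, since for any claims A and B,

P(A and B) ≤ P(A),

yet a vividly specified conjunction often feels more plausible than its bare components. That is the conjunction fallacy in one line.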
Nevertheless, mere mortals have at times managed to reason usefully and somewhat accurately about the future, even with little data. When Leo Szilard conceived of the nuclear chain reaction, he realized its destructive potential and filed his patent in a way that kept it secret from the Nazis (Rhodes 1995, 224–225). Svante Arrhenius’ (1896) models of climate change lacked modern climate theory and data but, by making reasonable extrapolations from what was known of physics, still managed to predict (within 2°C) how much warming would result from a doubling of CO2 in the atmosphere (Crawford 1997). Norman Rasmussen’s (1975) analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident that previous experts had not (McGrayne 2011, 180).
In planning for the future, how can we be more like Rasmussen and less like the Dartmouth conference? For a start, we can apply cognitive science’s recommendations for ameliorating overconfidence and other biases (Larrick 2004; Lilienfeld, Ammirati, and Landfield 2009). In keeping with these recommendations, we acknowledge unknowns and avoid models that depend on detailed storytelling. For example, we will not assume the continuation of Moore’s law, nor that hardware trajectories determine software progress. Avoiding nonsense should not require superhuman reasoning powers; it should only require that we refrain from believing we know something when we do not.
One might think such caution would prevent us from concluding anything of interest, but in fact it seems that an intelligence explosion may be a convergent outcome of many or most future scenarios. That is, an intelligence explosion may have fair probability not because it occurs in one particular detailed scenario, but because, like the evolution of eyes or the emergence of markets, it can come about through many different paths and can gather momentum once it gets started. Humans tend to underestimate the likelihood of such “disjunctive” events precisely because each individual path looks unlikely on its own (Tversky and Kahneman 1974). We suspect the considerations in this paper may convince you, as they did us, that this particular disjunctive event (an intelligence explosion) is worthy of serious consideration.
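As a toy illustration of this disjunctive structure (the numbers are invented purely to show the arithmetic, not estimates we defend): suppose an intelligence explosion could come about through five roughly independent routes, each with only a 10% chance of panning out. The chance that at least one pans out is then

1 − (1 − 0.1)^5 ≈ 0.41,

so the disjunction is moderately probable even though every individual route is unlikely. Intuition, anchored on any single route, tends to miss this.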
First, we provide evidence suggesting that, barring global catastrophe or other disruptions to scientific progress, there is a significant probability we will see the creation of digital intelligence within a century. Second, we suggest that the arrival of digital intelligence is likely to lead rather quickly to an intelligence explosion. Finally, we discuss the possible consequences of an intelligence explosion and what actions we can take now to shape them.
These questions are complicated, the future is uncertain, and our chapter is brief. Our aim, then, can only be to provide a quick survey of the issues involved. We believe these matters are important, and our discussion of them must be permitted to begin at a basic level, for there is no other place to lay the first stones.
References for this snippet
Bainbridge (2006), Managing Nano-Bio-Info-Cogno Innovations
Baum, Goertzel, and Goertzel (2011), How Long Until Human-Level AI?
Bostrom (2003), Ethical Issues in Advanced Artificial Intelligence
Chalmers (2010), The Singularity: A Philosophical Analysis
Legg (2008), Machine Super Intelligence
Sandberg and Bostrom (2011), Machine Intelligence Survey
Turing (1950), Computing Machinery and Intelligence
Good (1965), Speculations Concerning the First Ultraintelligent Machine
Von Neumann (1966), Theory of Self-Reproducing Automata
Solomonoff (1985), The Time Scale of Artificial Intelligence
Vinge (1993), The Coming Technological Singularity
Yudkowsky (2001), Creating Friendly AI
Yudkowsky (2008a), Artificial Intelligence as a Positive and Negative Factor in Global Risk
Russell and Norvig (2010), Artificial Intelligence: A Modern Approach, 3rd ed.
Nordmann (2007), If and Then: A Critique of Speculative Nanoethics
Moore and Healy (2008), The Trouble with Overconfidence
Tversky and Kahneman (2002), Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment
Nickerson (1998), Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
Dreyfus (1972), What Computers Can’t Do
Rhodes (1995), The Making of the Atomic Bomb
Arrhenius (1896), On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground
Crawford (1997), Arrhenius’ 1896 Model of the Greenhouse Effect in Context
Rasmussen (1975), Reactor Safety Study (WASH-1400)
McGrayne (2011), The Theory That Would Not Die
Larrick (2004), Debiasing
Lilienfeld, Ammirati, and Landfield (2009), Giving Debiasing Away
Tversky and Kahneman (1974), Judgment Under Uncertainty: Heuristics and Biases