Intelligence Explosion analysis draft: types of digital intelligence
Again, I invite your feedback on this snippet from an intelligence explosion analysis Anna Salamon and I have been working on.
_____
From here to digital intelligence
Our first step is to survey the evidence suggesting that, barring global catastrophe and other disruptions to scientific progress,1 there is a significant probability we will see the creation of digital intelligence2 within a century.
Why focus on digital intelligence instead of, say, the cognitive enhancement of biological humans? As we discuss in a later section, digital intelligence has certain advantages (e.g. copyability) that make it likely to lead to an intelligence explosion.
Below, we discuss the different types of digital intelligence, what kinds of progress are likely to push us closer to digital intelligence, and how to estimate the time at which digital intelligence will arrive.
Types of digital intelligence
To count as a “digital intelligence,” an artificial agent must have at least a human-level general capacity3 to achieve its goals in a wide range of environments, including novel ones.4
IBM’s Jeopardy!-playing Watson computer is not a digital intelligence in this sense because it can only solve a narrow problem. Imagine instead a machine that can invent new technologies, manipulate humans with acquired social skills, and otherwise learn to navigate new environments on the fly. A digital intelligence need not be sentient, though; it need only have a human-level capacity to achieve goals in a wide variety of environments.
There are many types of digital intelligence. To name just four:
The code of a transparent AI is written explicitly by, and largely understood by, its programmers.5
An opaque AI is not transparent to its creators. For example, it could be, like the human brain, a messy ensemble of cognitive modules. In an AI, these modules might be written by different teams for different purposes, using different languages and approaches.
A whole brain emulation (WBE) is a computer emulation of the brain structures required to functionally reproduce human thought and perhaps consciousness (Sandberg and Bostrom 2008). We need not understand the detailed operation of a brain to reproduce it functionally on a computing substrate.
A hybrid AI is a mix of any two or three of the above types of digital intelligence (transparent AI, opaque AI, and WBE).
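To make the “transparent AI” category concrete: a tabular reinforcement learner of the kind discussed in note 5 has an update rule written explicitly by, and fully understood by, its programmer. The sketch below is a minimal illustration of this idea; the toy chain environment and all parameter values are our own illustrative choices, not anything drawn from the literature.

```python
import random

def train_q_learning(n_states=5, n_actions=2, episodes=500,
                     alpha=0.1, gamma=0.9, epsilon=0.1):
    """Train a tabular Q-learner on a toy chain environment.

    Hypothetical environment, for illustration only: states 0..n_states-1,
    action 1 moves right, action 0 moves left; reaching the last state
    yields reward 1 and ends the episode.
    """
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda act: q[s][act])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # the explicit, human-readable update rule (Sutton and Barto 1998)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

Every line of such an agent, including the value-update rule itself, is authored and inspectable code; this is what distinguishes it from an opaque AI or a WBE, whose internals need not be understood by anyone.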
Notes for this snippet
1 By “disruptions to scientific progress” we have in mind “external” disruptions like catastrophe or a global totalitarianism that prevents the further progress of science (Caplan 2008). We do not mean to include, for example, Horgan’s (1997) hypothesis that scientific progress may soon stop because everything that can be discovered will have been discovered; we find this unlikely.
2 We introduce the term “digital intelligence” because we want a new term that refers to both human-level AI and whole brain emulations, and we don’t wish to expand the meaning of the common term “AI.”
3 The notion of “human-level intelligence” is fuzzy, but nevertheless we can identify clear examples of intelligences below the human level (rhesus monkeys) and above the human level (a human brain running at 1000 times its normal speed). A human-level intelligence is any intelligent system not clearly below or above the human level.
4 Legg (2008) argues that many definitions of intelligence converge on this idea. We mean to endorse this informal definition, not Legg’s attempt to formalize intelligence in a later section of his manuscript.
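The informal idea in note 4 — intelligence as the capacity to achieve goals across many environments — can be given a rough computational shape. The sketch below is ours, not Legg’s formal measure: the agent/environment interface is hypothetical, and the uniform weighting over environments is a deliberate simplification (Legg’s formalization instead weights environments by their simplicity).

```python
def informal_intelligence_score(agent, environments, episodes_per_env=100):
    """Average an agent's goal-achievement over a set of environments.

    The agent and each environment are assumed (hypothetically) to expose:
      env.reset() -> observation
      env.step(action) -> (observation, reward, done)
      agent.act(observation) -> action
    The score is the mean per-episode reward, averaged across environments,
    so an agent scores well only by succeeding in many different settings.
    """
    env_scores = []
    for env in environments:
        total = 0.0
        for _ in range(episodes_per_env):
            obs, done = env.reset(), False
            while not done:
                obs, reward, done = env.step(agent.act(obs))
                total += reward
        env_scores.append(total / episodes_per_env)
    # Uniform weighting over environments is a simplification; a narrow
    # system like Watson would score well on one environment and poorly
    # on the rest, dragging its average down.
    return sum(env_scores) / len(env_scores)
```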
5 Examples include many of today’s reinforcement learners (Sutton and Barto 1998), and also many abstract models such as AIXI (Hutter 2004), Gödel machines (Schmidhuber 2007), and Dewey’s (2011) “implemented agents.”
References for this snippet
Caplan, B. (2008). The Totalitarian Threat.
Dewey, D. (2011). Learning What to Value.
Horgan, J. (1997). The End of Science.
Hutter, M. (2004). Universal Artificial Intelligence. Springer.
Legg, S. (2008). Machine Super Intelligence. PhD thesis, University of Lugano.
Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Future of Humanity Institute, Oxford University.
Schmidhuber, J. (2007). Gödel Machines: Fully Self-Referential Optimal Universal Self-Improvers.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.