Distinguish positive from negative criticisms: those aimed at demonstrating the unlikelihood of an intelligence explosion, and those aimed merely at undermining the arguments/evidence for its likelihood (thus moving the posterior probability of an explosion back toward its prior).
Here is the most important negative criticism of the intelligence explosion: possibly harsh diminishing returns on intelligence amplification. Let f(x, y) measure the difficulty (say, the expected time to complete development) for an intelligence of IQ x to engineer an intelligence of IQ y. The claim that intelligence explodes is roughly equivalent to the thesis that f(x, x+1) decreases relatively quickly. What is the evidence for this claim? I haven’t seen much. Chalmers briefly discusses the issue in his article on the singularity and points out that amplifying a human being’s intelligence from average to Alan Turing’s level amplifies his intelligence-engineering ability from more or less nil to being able to design a basic computer. But “nil” and “a basic computer” are strictly stupider than “average human” and “Alan Turing,” respectively. This is evidence that a curve like f(x, x-1), the difficulty of creating a being slightly stupider than yourself given your own intelligence level, decreases relatively quickly. But the shapes of f(x, x+1) and f(x, x-1) are unrelated: the one can increase exponentially while the other decays exponentially. (Proof: set f(x, y) = e^(y^2 - x^2).)
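For anyone who wants to see the counterexample numerically, here is a minimal sketch (Python, purely illustrative; the inputs are toy IQ-like values, not real measurements):

```python
import math

# Illustrative only: with f(x, y) = e^(y^2 - x^2), the "build something
# slightly smarter" curve f(x, x+1) blows up while the "build something
# slightly stupider" curve f(x, x-1) collapses.

def f(x, y):
    return math.exp(y**2 - x**2)

for x in range(1, 6):
    harder = f(x, x + 1)   # f(x, x+1) = e^(2x+1): grows exponentially in x
    easier = f(x, x - 1)   # f(x, x-1) = e^(1-2x): decays exponentially in x
    print(f"x={x}: f(x, x+1)={harder:.3g}, f(x, x-1)={easier:.3g}")
```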
See also JoshuaZ’s insightful comment here on how some of the concrete problems involved in intelligence amplification are linked to some (very likely) computationally intractable problems from CS.
Another thing: we need to distinguish between getting better at designing intelligences and getting better at designing intelligences that are smarter than one’s own. The claim that “the smarter you are, the better you are at designing intelligences” can be interpreted as saying that the function f(x, y) outlined above is decreasing in x for any fixed y. But the claim that the smarter you are, the easier it is to create an intelligence even smarter than yourself is a totally different claim, equivalent to the aforementioned thesis about the shape of f(x, x+1). (The same counterexample separates them: f(x, y) = e^(y^2 - x^2) is decreasing in x for every fixed y, yet f(x, x+1) = e^(2x+1) grows exponentially.)
I see the two claims conflated shockingly often, e.g., in Bostrom’s article, where he simply states:
Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth.
and concludes that superintelligence inevitably follows, with no intermediary reasoning on the software level. (Actually, he doesn’t state that outright, but the sentence is at the beginning of the section entitled “Once there is human-level AI there will soon be superintelligence.”) That an IQ 180 AI is (much) better than a human at developing an IQ 190 AI doesn’t imply that it can develop the IQ 190 AI faster than the human developed the IQ 180 AI.
Here’s a line of reasoning that seems to suggest the possibility of an interesting region of decreasing f(x, x+1). It focuses on human evolution and evolutionary algorithms.
Human intelligence appeared relatively recently through an evolutionary process. There doesn’t seem to be much reason to believe that, if the evolutionary process were allowed to continue (instead of being largely pre-empted by memetic and technological evolution), future hominids wouldn’t be considerably smarter. Suppose evolutionary algorithms can be used to design a human-equivalent intelligence with minimal supervision or intervention by genuinely intelligent-design methods. In that case, we should expect, with substantial probability, that carrying the evolution forward would lead to still more intelligence. And since such an evolutionary experiment is driven largely by brute-force computation, any increase in the computing power underlying the evolutionary “playing field” would increase the rate at which the evolving population gains intelligence.
I’m not an expert on, or even a practitioner of, evolutionary design, so please criticize and correct this line of reasoning.
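To make the brute-force point concrete, here is a minimal sketch of a generic (1+λ) evolutionary loop, where the offspring count λ stands in for the computing power behind the “playing field.” The fitness function is a toy stand-in, not a claim about how one would actually score intelligence:

```python
import random

def fitness(genome):
    # Toy objective: maximize the sum of the genome's entries.
    return sum(genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

def evolve(generations=100, lam=10, genome_len=16):
    parent = [0.0] * genome_len
    for _ in range(generations):
        offspring = [mutate(parent) for _ in range(lam)]
        best = max(offspring, key=fitness)
        if fitness(best) > fitness(parent):
            parent = best
    return fitness(parent)

# More compute per generation (larger lam) means more candidates evaluated,
# so fitness tends to climb faster per generation of wall-clock time.
print(evolve(lam=10), evolve(lam=1000))
```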
I agree there’s good reason to imagine that, had further selective pressure on increased intelligence been applied in our evolutionary history, we probably would’ve ended up more intelligent on average. What’s substantially less clear is whether we would’ve ended up much outside the present observed range of intelligence variation had this happened. If current human brain architecture happens to be very close to a local maximum of intelligence, then raising the average IQ by 50 points still may not get us to any IQ 200 individuals. So while there likely is a nearby region of decreasing f(x, x+1), it doesn’t seem so obvious that it’s wide enough to terminate in superintelligence. Given the notorious complexity of biological systems, it’s extremely difficult to extrapolate anything about the theoretical limits of evolutionary optimization.
See also JoshuaZ’s insightful comment here on how some of the concrete problems involved in intelligence amplification are linked to some (very likely) computationally intractable problems from CS.
Those insights are relevant and interesting for the type of self-improvement feedback loop that assumes unlimited potential for improvement in algorithmic efficiency. However, there is a much more basic intelligence explosion that is purely hardware-driven.
Brain architecture certainly limits maximum practical intelligence, but does not determine it. Just as the relative effectiveness of current chess AI systems is limited by hardware but determined by software, human intelligence is limited by the brain but determined by acquired knowledge.
The hardware is qualitatively important only up to the point where you have something that is Turing-complete. Beyond that the differences become quantitative: memory constrains program size, and performance limits execution speed.
Even so, AGIs that are ‘just’ at human-level IQ can still quickly lead to an intelligence explosion by being sped up by a factor of a million and then replicated by the trillions. IQ is a red herring anyway: it’s a baseless anthropocentric measure that doesn’t scale to the performance domains of superintelligences. If you want a hard quantitative measure, use standard computational measures: e.g., a human brain is roughly a circuit of fewer than 10^15 elements that performs at most about 10^18 circuit operations per second.
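A back-of-the-envelope sketch with those figures (all rough upper bounds taken from the paragraph above, not measurements):

```python
# Rough upper bounds only: one brain as a < 10^15-element circuit doing
# < 10^18 circuit ops per second, scaled by the speedup and population
# factors mentioned above.

brain_ops_per_sec = 1e18     # upper bound on circuit ops/sec for one brain
speedup = 1e6                # "speeding them up by a factor of a million"
population = 1e12            # "creating trillions of them"

aggregate = brain_ops_per_sec * speedup * population
print(f"Sped-up AGI population: ~{aggregate:.0e} circuit ops per second")
# versus roughly 1e28 ops/sec for ~10^10 present-day human brains
print(f"Current humanity:       ~{brain_ops_per_sec * 1e10:.0e} circuit ops per second")
```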