Good post.
You seem to excessively focus on recursive self-improvement to the exclusion of other hard takeoff scenarios, however. As Eliezer noted:
RSI is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a “hard takeoff” aka “AI go FOOM”, but it’s nowhere near being the only such factor. The advent of human intelligence was a discontinuity with the past even without RSI...
That post mentions several other hard takeoff scenarios, e.g.:
Even if an AI’s self-improvement efforts quickly hit a wall, a small number of crucial optimizations or the capture of a particularly important resource will provide it with a massive intelligence advantage over humans. (Has evolutionary precedent in that the genetic differences between humans and chimps are relatively small.)
Parallel hardware overhang: if there’s much more hardware available than it takes to run an AI, an AI could expand itself and thus become more intelligent by simply “growing a bigger brain”, or create an entire society of co-operating AIs.
Serial hardware overhang: an AI running on processors with more serial speed than neurons could, e.g., process longer chains of inference instead of relying on cache lookups.
(Also a couple more, but I found those a little vague and couldn’t come up with a good way to summarize them in a few sentences.)
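To put very rough numbers on the two overhang scenarios above, here is a minimal back-of-the-envelope sketch. All figures are illustrative assumptions rather than claims from the post: neurons firing on the order of 100 Hz, processor cores clocked around 3 GHz, and a hypothetical AI footprint of a few thousand accelerators out of millions in existence.

```python
# Back-of-the-envelope sketch of the two "hardware overhang" scenarios.
# All figures are illustrative assumptions, not claims from the thread.

# Serial overhang: cortical neurons fire on the order of 100 Hz, while a
# commodity processor core is clocked around 3 GHz.  The ratio is a crude
# bound on how much faster a serial chain of reasoning steps could run.
neuron_rate_hz = 1e2                 # assumed typical neural firing rate
cpu_clock_hz = 3e9                   # assumed processor clock speed
serial_speedup = cpu_clock_hz / neuron_rate_hz
print(f"crude serial speedup bound: {serial_speedup:.0e}x")       # ~3e+07x

# Parallel overhang: if one AI instance needs N accelerators and M exist,
# roughly M // N copies (or one correspondingly bigger "brain") could run.
accelerators_needed = 5_000          # hypothetical footprint of one instance
accelerators_available = 5_000_000   # hypothetical total supply it could tap
parallel_copies = accelerators_available // accelerators_needed
print(f"hypothetical parallel copies: {parallel_copies}")         # 1000
```

The only point of the sketch is that both overhangs describe multiplicative headroom that would exist before any self-improvement happens at all; the actual magnitudes depend entirely on the assumed numbers.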
Ah, thanks for making this point; I notice I’ve recently been treating “recursive self-improvement” and “hard takeoff” as more or less interchangeable concepts. I don’t think I need to update on this, but I’ll try to use my language more carefully at least.
That post mentions several other hard takeoff scenarios...
Thanks. I will review those scenarios. Just some quick thoughts:
Has evolutionary precedent in that the genetic differences between humans and chimps are relatively small.
At first sight this sounds suspicious. The genetic difference between a chimp and a human amounts to roughly 40–45 million bases that are present in humans and missing from chimps. And that number does not even account for the differences in gene expression between humans and chimps. So it’s not as if you add a tiny bit of code and get superapish intelligence.
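For scale, a minimal sketch of the arithmetic, assuming the commonly cited figure of roughly 3.2 billion base pairs for the human genome; the 40–45 million base figure is the one given above, and both numbers are rough:

```python
# Rough scale of the human-chimp sequence difference.
# The ~3.2 billion base pair genome size is an assumed, commonly cited figure;
# the 40-45 million base figure is the one quoted in the comment above.
human_genome_bp = 3.2e9
human_specific_bases = 42.5e6    # midpoint of the 40-45 million range

fraction = human_specific_bases / human_genome_bp
megabytes = human_specific_bases * 2 / 8 / 1e6   # 2 bits per base, 8 bits per byte

print(f"fraction of the genome: {fraction:.1%}")          # ~1.3%
print(f"raw sequence information: ~{megabytes:.0f} MB")   # ~11 MB
```

On those assumptions the difference is small as a fraction of the genome, around one percent, but still on the order of ten megabytes of raw sequence, which is the sense in which it is not a tiny bit of code; and none of this captures the gene-expression differences mentioned above.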
The argument from the gap between chimpanzees and humans is interesting but cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own. That chimpanzees exist, and humans exist, is not proof of the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a chimpanzee.
Serial hardware overhang: an AI running on processors with more serial speed than neurons could, e.g., process longer chains of inference instead of relying on cache lookups.
Humans can process long chains of inference with the help of tools. The important question is whether incorporating those tools into some sort of self-perception, some sort of guiding agency, is vastly superior to humans using a combination of tools and expert systems.
In other words, it is not clear that there exists a class of problems that is solvable by Turing machines in general but not by a combination of humans and expert systems.
If an AI that we invented can hold a complex model in its mind, then we can also simulate such a model by making use of expert systems. Being consciously aware of the model doesn’t make any great difference in principle to what you can do with the model.
Here is what Greg Egan has to say about this in particular:
Whether a mind can synthesise, or simplify, many details into something more tightly knit doesn’t really depend on any form of simultaneous access to the data in something like human working memory. Almost every complex mathematical idea I understand, I only really understand through my ability to scribble things on paper while I’m reading a textbook. No doubt some lucky people have bigger working memories than mine, but my point is that modern humans synthesise concepts all the time from details too complex to hold completely in their own biological minds. Conversely, an AI with a large working memory has … a large working memory, and doesn’t need to reach for a sheet of paper. What it doesn’t have is a magic tool for synthesising everything in its working memory into something qualitatively different.
The quote from Egan would seem to imply that for (literate) humans, too, working memory differences are insignificant: anyone can just use pen and paper to increase their effective working memory. But human intelligence differences do seem to have a major impact on e.g. job performance and life outcomes (e.g. Gottfredson 1997), and human intelligence seems to be very closely linked to—though admittedly not identical with—working memory measures (e.g. Oberauer et al. 2005, Oberauer et al. 2008).
The quote from Egan would seem to imply that for (literate) humans, too, working memory differences are insignificant: anyone can just use pen and paper to increase their effective working memory.
I believe what he is suggesting is that once you reach a certain plateau, intelligence hits diminishing returns. Would Marilyn vos Savant be proportionally more likely to take over the world, if she tried to, than a 115 IQ individual?
Some anecdotal evidence:
… mathematician John von Neumann, … was incomparably intelligent, so bright that, the Nobel Prize-winning physicist Eugene Wigner would say, “only he was fully awake.”
I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jansci [John] von Neumann. I have often remarked this in the presence of those men and no one ever disputed me.
… But Einstein’s understanding was deeper even than von Neumann’s. His mind was both more penetrating and more original than von Neumann’s. And that is a very remarkable statement. Einstein took an extraordinary pleasure in invention. Two of his greatest inventions are the Special and General Theories of Relativity; and for all of Jansci’s brilliance, he never produced anything as original.
Is there evidence that a higher IQ is useful beyond a certain level? The question is not just whether it is useful but whether it would be worth the effort it would take to amplify your intelligence to that point, given that your goal was to overpower lower-IQ agents. Would a change in personality, more data, a new pair of sensors, or some weapons perhaps be more useful? If so, would an expected utility maximizer pursue intelligence amplification?
(A marginal note: bigger is not necessarily better.)
I upvoted for the anecdote, but remember that you’re referring to von Neumann, who invented both the basic architecture of computers and the self-replicating machine. I am not qualified to judge whether or not those are as original as relativity, but they are certainly big.
Would Marilyn vos Savant be proportionally more likely to take over the world, if she tried to, than a 115 IQ individual?
Sure. She’s demonstrated that she can communicate successfully with millions and handle her own affairs quite successfully, generally winning at life. This is comparable to, say, Ronald Reagan’s qualifications. I’d be quite unworried in asserting she’d be more likely to take over the world than a baseline 115 person.
The argument from the gap between chimpanzees and humans is interesting but cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own.
Surely humans are Turing complete. I don’t think anybody disputes that.
We know that capabilities extend above our own in all the realms where machines already outstrip our capabilities—and we have a pretty good idea what greater speed, better memory and more memory would do.
Agree with your basic point, but a nit-pick: limited memory and speed (heat death of the universe, etc.) put many neat Turing machine computations out of reach of humans (or other systems in our world) barring new physics.
Sure: I meant in the sense of the “colloquial usage” here:
In colloquial usage, the terms “Turing complete” or “Turing equivalent” are used to mean that any real-world general-purpose computer or computer language can approximately simulate any other real-world general-purpose computer or computer language, within the bounds of finite memory—they are linear bounded automaton complete. A universal computer is defined as a device with a Turing complete instruction set, infinite memory, and an infinite lifespan; all general purpose programming languages and modern machine instruction sets are Turing complete, apart from having finite memory.
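To make the “colloquial” sense concrete, here is a minimal sketch (my own illustration, not from the quoted passage) of a Turing machine simulator in Python. Any system that can follow a transition table like this one, whether a processor, an expert system, or a person with pen and paper, is Turing complete in that colloquial sense, limited only by the memory and time available to it.

```python
# A minimal Turing machine simulator (illustrative sketch).
# rules: (state, symbol) -> (symbol_to_write, move, next_state)

def run(rules, tape, state="start", blank="_", steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: scan right, inverting bits, halt at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(invert, "10110"))   # prints 01001_
```

The example machine just inverts a string of bits, but the same dozen-line loop runs any transition table you hand it; that is the content of the claim that humans, who could execute such a table by hand with enough paper and time, are Turing complete in this colloquial sense.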