I love the idea of an intelligence explosion, but I think you have hit on a very strong point here:
In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There’s no guarantee that “how smart the AI is” will keep up with “how hard it is to think of ways to make the AI smarter”; to me, it seems very unlikely.
In fact, we can see from both history and paleontology that when a breakthrough is made in “biological technology”, like the homeobox genes or whatever triggered the Precambrian explosion of diversity, and self-modification becomes easier, there is apparently an explosion of explorers, of bloodlines, into the newly accessible areas of design space. (Here a ‘self’ isn’t one meat body; it’s a clade of genes that sail through time and configuration space together: think of a current of bloodlines in spacetime that we might call a “species” or genus or family. And the development of modern-style morphogenesis amounts, at some level, to developing a toolkit for modifying the body plan.)
But the explosion eventually ended. After the diaspora into over a hundred phyla of critters hard enough to leave fossils, the expansion into new phyla stopped. Some sort of frontier was reached within tens of millions of years; the next six hundred million years or so were spent slowly whittling improvements within phyla. Most phyla died out, in fact, while a few, like Arthropoda, took over many roles and niches.
We see very similar incidents throughout human history: look at the way languages develop, or technologies. For an example perhaps familiar to many readers, consider the history of algorithms. For thousands of years we see slow development in this field, from Babylonian procedures for finding the area of a triangle, to the Sieve of Eratosthenes, to (after a lot of development) medieval Italian merchants writing down how to do double-entry bookkeeping.
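The Sieve of Eratosthenes, by the way, is one of those ancient algorithms simple enough to state in a few lines; here is a minimal modern rendering (the function name and bounds are my own choices, not anything from the historical sources):

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p, starting at p*p
            # (smaller multiples were crossed out by smaller primes).
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A two-millennia-old procedure that runs unchanged on a machine today; that continuity is part of the point of the history sketched above.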
Then, in the later part of the Renaissance, there is some kind of phase change, and the mathematical community begins compiling books of algorithms quite consciously. This has happened before: in Sumer and Egypt to start, in Babylon and Greece, in Asia several times, and most notably in the House of Wisdom in Baghdad in the ninth century. But always there were these rising and falling cycles, where people compile knowledge, then it is lost, and others have to rebuild; often the new cycle is helped by the rediscovery or re-appreciation of a few surviving texts from a prior cycle.
But around 1350 a new cycle begins (drawing, of course, on surviving data from prior cycles) in which people accumulate formally expressed algorithms, and it is unique in that it has lasted to this day. Much of what we call the mathematical literature consists of these collections, and in the 1930s people (Church, Turing, and many others) finally developed what we might now call the classical theory of algorithms. Judging by the progress of various other disciplines, you would expect little further progress in this field, relative to such a capstone achievement, for a long time.
(One might note that this seven-century surge of progress might well be due not to human mathematicians somehow becoming more intelligent in some biological way, but to the development of printing and its associated arts and customs, which led to the widespread dissemination of information in journals and books with many copies of each edition. The custom of open-sourcing your potentially extremely valuable algorithms was probably as important as the technology of printing here; remember that medieval and ancient bankers and so on all had little trade secrets for handling numbers and doing maths in a formulaic way, but the general body of algorithmic lore retains none of their secret tricks unless they published or chance preserved some record of their methods.)
Now, we’d have expected Turing’s 1930s work to be the high point in this field for centuries to come (and maybe it was; let history be the judge), but between the development of the /theory/ of a general computing machine, progress in other fields such as electronics, and a leg up from the intellectual legacy of predecessors such as George Boole, the 1940s somehow put together (under enormous pressure of circumstances) a new sort of engine that could run algorithmic calculations without direct human intervention. (Note that here I say ‘run’, not ‘design’; I mean that the new engines could execute algorithms on demand.)
The new computing engines, electro-mechanical as well as purely electronic, were very fast compared to their human predecessors. This led to something in algorithm space that looks to me a lot like the Precambrian explosion, with many wonderful critters like LISP and FORTRAN and BASIC evolving to bridge the gap between human minds and assembly language, which itself was a bridge to the level of machine instructions, which… and so on. Layers upon layers developed, and then in the 1960s giants wrought mighty texts of computer science that no modern professor can match; we can only stare in awe at their achievements.
And then… although Moore’s law worked on and on tirelessly, relatively little fundamental progress in computer science happened over the next forty years. There was a huge explosion in available computing power, but, just as jpaulson suspects, merely adding computing power didn’t cause a vast change in our ability to ‘do computer science’. Some problems may /just be exponentially hard/, so an exponential increase in capability starts to look like a linear increase by the measure that matters.
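The exponential-versus-linear point can be made concrete with a toy calculation (the cost model and function names here are illustrative assumptions, not a claim about any particular problem): if solving an instance of size n costs 2^n operations, then doubling your compute every generation buys you only one more unit of n per generation.

```python
# Toy model: a problem of size n costs 2**n operations to solve.
def largest_solvable(compute_budget):
    """Largest n such that 2**n fits within the compute budget."""
    n = 0
    while 2 ** (n + 1) <= compute_budget:
        n += 1
    return n

# Compute doubles each generation, so the budget after g generations is 2**g;
# the largest solvable n grows only linearly with g.
for generation in (0, 10, 20, 40):
    budget = 2 ** generation
    print(generation, largest_solvable(budget))  # n == generation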
It may well be that people will just… adapt… to exponentially increasing intellectual capacity by dismissing the ‘easy’ problems as unimportant, and by treating whatever goes on beyond the capacity of the human mind to grasp as “nonexistent” or “also unimportant”. Right now, computers are executing many, many algorithms too complex for any one human mind to follow (and maybe too tedious for any but the most dedicated humans to follow, even in teams) and we still don’t think they are ‘intelligent’. If we can’t recognize an intelligence explosion when we see one under our noses, it is entirely possible we won’t even /notice/ the Singularity when it comes.
If it comes, that is. As jpaulson indicates, there might be a never-ending series of ‘tiers’ where we think, “Oh, past here it’s just clear sailing up to the level of the Infinite Mind of Omega; we’ll be there soon!” but when we actually reach the next tier, we might always find a new kind of problem, hyperexponentially difficult to solve, standing before any further ascent.
If it were all that easy, I would expect that whatever gave us self-reproducing wet nanomachines four billion years ago would have solved it already; the ocean has been full of protists and free-swimming viruses, exchanging genetic instructions and evolving freely, for a very long time. That system certainly has a great deal of raw computing power, perhaps more than appears on the surface. If she (the living ocean system as a whole) isn’t wiser than the average individual human, I would be very surprised; and she apparently either couldn’t create such a runaway explosion of intelligence, or decided it would be unwise to do so any faster than the intelligence explosion we’ve been watching unfold around us.