I largely agree with Robin’s point that smaller incremental steps are necessary.
But Eliezer’s point about big jumps deserves a reply. The transitions to humans and to atomic bombs do indicate something to think about—and for that matter, so does the emergence of computers.
These all seem to me to be cases where gradually rising or shifting capacities encounter a new “sweet spot” in the fitness landscape. Other examples are the evolution of flight and of eyes, each of which happened several times, or of trees, a morphological innovation that arose independently in multiple botanical lineages.
Note that even for innovations that fit this pattern, e.g. computers and atomic bombs, enormous amounts of incremental development are required before we can get to the sweet spot and start to expand there. (This is also true for biological evolution of course.)
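To make this picture concrete, here is a minimal sketch (in Python, with a fitness function I invented purely for illustration) of how a fixed-size incremental process produces a jump in the capability curve the moment it crosses into a steeper region of the landscape:

```python
# Toy 1-D fitness landscape, purely illustrative: a long, gently rising slope
# that opens into a much steeper region past x = 8 (the "sweet spot").
def fitness(x):
    value = 0.1 * x                      # slow incremental gains
    if x > 8:
        value += 2.0 * (x - 8)           # steep payoff inside the sweet spot
    return value

# Incremental "evolution": a greedy hill climber taking small fixed steps.
x, step = 0.0, 0.5
for generation in range(31):
    if fitness(x + step) > fitness(x):   # take the step only if it is uphill
        x += step
    if generation % 5 == 0:
        print(f"gen {generation:2d}: x = {x:4.1f}  fitness = {fitness(x):6.2f}")
```

The step size never changes; only the local slope does. The point is that a discontinuity in observed progress does not require any discontinuity in the underlying incremental process.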
I think most human innovations (tall buildings, rockets, etc.) are due to incremental accumulation of this sort, rather than to finding any big sweet spots.
I should also note that decades before the atomic bomb, nature’s actual production of energy from nuclear processes was clear, if not understood in detail: radioactive decay heating the Earth’s interior (geothermal) and fusion powering the sun. Similarly, the potential of general-purpose computers was sensed (e.g. by Ada Lovelace) long before we could build them. This foreknowledge was quite concrete—it involved detailed physical accounts of existing sources of energy, automation of existing computing techniques, etc. So this sort of sweet spot can be understood in quite detailed ways well before we have the technical skills to reach it.
Using this model, if AGI arrives rapidly, it will be because we found a sweet spot over and above the one computing itself represents. If AGI is feasible in the near future, that implies we are near such a sweet spot now. And if we are near such a sweet spot, we should be able to understand some of its specific form (beyond “it uses Bayesian reasoning”) and the limitations that currently keep us from reaching it.
I agree with Eliezer that Bayesian methods are “forced”, and I also feel that the “Good Old Fashioned AI” folks (certainly including Schank and McCarthy) are not good forecasters, for many reasons.
However, Bayesian approaches are at the root of existing impressive AI, such as Thrun’s work on autonomous vehicles. I have been watching this work fairly closely, and it is making the normal sort of incremental progress. If there’s a big sweet spot nearby in the fitness landscape, these practitioners should be able to sense it. They would be well qualified to comment on the prospects for AI, and for AGI in particular, and I would be very interested in what they have to say.
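As a concrete illustration of what “Bayesian at the root” means here, the following is a minimal sketch of the discrete Bayes filter that underlies probabilistic robot localization. The corridor, the door layout, and the sensor probabilities below are all invented for illustration; this is the textbook update rule, not code from any actual vehicle:

```python
# A robot in a 5-cell corridor senses "door" or "wall"; cells 0 and 3 have doors.
doors = [True, False, False, True, False]
belief = [0.2] * 5                       # uniform prior over position

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Bayes' rule: weight each position hypothesis by the measurement likelihood."""
    posterior = [b * (p_hit if doors[i] == measurement else p_miss)
                 for i, b in enumerate(belief)]
    total = sum(posterior)
    return [p / total for p in posterior]    # normalize to a probability distribution

def move(belief, steps=1):
    """Shift the belief to model (noise-free) motion along the corridor."""
    return [belief[(i - steps) % len(belief)] for i in range(len(belief))]

belief = sense(belief, True)     # robot sees a door
belief = move(belief)            # robot advances one cell
belief = sense(belief, False)    # robot sees a wall
print([round(p, 3) for p in belief])
```

Systems like Thrun’s replace this toy grid with particle filters and far richer sensor and motion models, but the structural idea is the same: maintain a belief over states and update it with Bayes’ rule as evidence arrives.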