People might expect there to be lots of AIs quickly, but not each individual AI to grow quickly. Remember, the typical case is that parallelization sucks hard: you get sublinear scaling after a lot of work, and it often tops out under a relatively small number of computers. That’s why everyone was so unhappy when the single-core-performance version of Moore’s law broke down: we don’t want to program in parallel. On top of that, a lot of people have intuitions about diminishing returns & computational complexity which suggest that throwing more computing power at an AI helps less and less.
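(For what it’s worth, the intuition here is roughly Amdahl’s law. A toy sketch in Python, with made-up numbers, purely to illustrate the “tops out” point:)

```python
# Toy illustration of Amdahl's law (illustrative numbers, not from the parent comment):
# if a fraction p of the work parallelizes and (1 - p) stays serial,
# the speedup on n machines is 1 / ((1 - p) + p / n), which flattens out quickly.
def speedup(p, n):
    return 1.0 / ((1 - p) + p / n)

for n in (2, 8, 64, 1024):
    print(n, round(speedup(0.95, n), 2))
# Even with 95% of the code parallel, 1024 machines give only ~19.6x;
# the serial 5% caps the ceiling at 20x no matter how many machines you add.
```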
For most AGI architectures I’ve seen, the computationally expensive work is embarrassingly parallel. Programming solutions to embarrassingly parallel problems is quite simple.
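(A minimal sketch of why that’s simple, assuming the expensive per-item work is independent; `evaluate_candidate` is a hypothetical stand-in, not anything from a real AGI codebase:)

```python
# Each item is scored independently, so distributing the work is one line
# with a process pool; no locking or clever decomposition is needed.
from multiprocessing import Pool

def evaluate_candidate(x):
    return x * x  # placeholder for the real, expensive, independent computation

if __name__ == "__main__":
    with Pool() as pool:                      # defaults to one worker per core
        results = pool.map(evaluate_candidate, range(10_000))
    print(len(results), results[:5])
```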
Is that generally accepted even just in the AGI community? That’s another idea I usually see exclusively associated with Singularitarian communities. (As you say, it is controversial in general.)
I guess that depends on how “generally accepted” is to be interpreted. It is not as widely accepted as, say, plate tectonics is among geologists. It is certainly a view held among all OpenCog developers, including Goertzel; OpenCog itself is basically designed for recursive self-improvement. I also recall reading an interview with Hugo de Garis where he discussed a similar recursive self-improvement scenario; hopefully someone can find a link. Talks on friendliness and hard-takeoff risk reduction are common at the AGI conferences. It’s not a universal view, however: Pei Wang’s NARS seems to be predicated on a One True Algorithm for general intelligence, which “obviously” wouldn’t need improvement once found.
Perhaps my view is biased towards the communities I frequent, as my own work is on how to turn OpenCog/CogPrime into a recursively self-improving implementation. So the people I interact with already buy into the recursive self-improvement argument. It is a very straightforward argument, however: if you assume that greater-than-human intelligence is possible, and that human-level intelligence is capable of building such a thing, then it is straightforward induction that a human-level artificial computer scientist could also build such a thing, and that, either by applying improvements to itself or by staging successors, it could do so at an accelerating speed. To the extent that an AGI researcher accepts the two premises (uncontroversial, I think, albeit not universal), I predict with high probability that they also believe some sort of takeoff scenario is possible. There’s a reason there is significant overlap between the AGI and Singularitarian communities.
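(To make the “accelerating speed” step concrete, here is a deliberately crude toy model; the constants are my own assumptions and nothing here models real research difficulty:)

```python
# Crude toy model of staged self-improvement (illustrative only).
capability = 1.0    # 1.0 = human-level artificial computer scientist
time_elapsed = 0.0
for generation in range(1, 6):
    time_per_generation = 1.0 / capability   # a smarter designer works faster
    time_elapsed += time_per_generation
    capability *= 1.5                        # each redesign assumed 50% better
    print(generation, round(capability, 2), round(time_elapsed, 2))
# Capability compounds while each generation takes less wall-clock time --
# the induction step described above. Whether real returns look like this,
# or flatten out, is exactly where researchers disagree.
```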
Where people differ greatly, I think, is in the limits of (software) self-improvement, the need for interaction with the environment as part of the learning process, and as a result both the conditions and timeline for a hard takeoff. Goertzel is working on OpenCog for the same reason that Yudkowsky is working on FAI theory; however, their views on the hard takeoff seem to be at opposite ends of the spectrum. Yudkowsky seems to think that whatever limits exist on the efficiency of computational intelligence, they are at the very least many orders of magnitude beyond what we humans will design, and that such improvements can be made with little more than a webcam sensor, or access to the internet, and introspection—something that will “FOOM” in a matter of days or less. Goertzel, on the other hand, sees intelligence as navigation of a very complex search space requiring massive amounts of computation, experimental interaction with the environment, and quite possibly some sort of physical embodiment—all of which rate-limit advances to months or years and require constant human interaction. I myself lie somewhere in between, but biased towards Goertzel’s view.