I think that the mere development of an AGI with 10-year-old-human intelligence (or even infant-level) would first require stumbling across crucial generalizable principles of how intelligence works. In other words, by that point there would have to be a working theory of intelligence that could probably be scaled up fairly straightforwardly. Then the only limit to an intelligence explosion would be hardware and energy resources (and that constraint may matter most while the theory is still in its infancy; later designs might be far more resource-efficient). I would expect economic pressure and international politics to create a perfect storm of misaligned incentives, so that once a general theory is found, even a resource-intensive one, you would see exponential growth (actually sigmoidal, as [temporary] hard limits are approached) in the intelligence of the largest AGI systems.
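To make the exponential-versus-sigmoidal distinction concrete, here is a minimal logistic-growth sketch in Python; the starting capability, growth rate, and ceiling are hypothetical parameters chosen only for illustration, not values implied by the argument above.

```python
import math

# Illustrative logistic ("sigmoidal") growth: early on the curve looks
# exponential, then it flattens as a temporary hard limit (the ceiling K)
# is approached. All parameter values here are hypothetical.
def capability(t, c0=1.0, r=0.5, K=1000.0):
    """Closed-form solution of dC/dt = r * C * (1 - C / K), with C(0) = c0."""
    return K / (1 + (K / c0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 20, 40):
    print(f"t={t:>2}: capability ~ {capability(t):.1f}")
```

Early values roughly multiply at a fixed interval, while later values pile up just under the ceiling, which is the "exponential until hard limits bite" shape described above.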
You might find a rate-limiting step in the time it takes to train an AGI system, though. That would extend the window of opportunity for making course corrections before superintelligence is reached. However, once a system is trained, it might be easy to make many copies and combine them into a collective superintelligence, even if training a singleton ASI from scratch would take much longer. Let’s hope that a working theory of general alignment comes no later than a working theory of general intelligence.
Thanks for the in-depth answer. The engineer side of me gets leery whenever ‘straightforward real-world scaling following a working theory’ is a premise; the likelihood of there being no significant technical obstacles at all, other than resources and energy, seems vanishingly low. A thousand and one factors could get in the way of realizing even the most perfect theory, as with any other complex engineered system: possible surprises include dependence on the substrate, on the specific arrangement of hardware, on software factors, on other emergent effects, and so on.
If there is a general theory of intelligence and it scales well, there are two possibilities. Either we are already in a hardware overhang, and we get an intelligence explosion even without recursive self-improvement, or the compute required is so great that it takes an expensive supercomputer to run, in which case it’ll be a slow takeoff. The probability that we have exactly human-level amounts of compute seems low to me; probably we either have way too much or way too little.