I don’t think anyone really expects this sort of scenario, but it does make for some nice safe science fiction stories where humans get to play a meaningful role in the outcome of the plot.
Personally I think there are a few pretty major things working against it.
It seems likely that if we can get to chimpanzee-equivalent capability at all (about the minimum I’d call AGI), scaling up by a factor of 10 with only relatively few architectural tweaks will give something at least as generally capable as a human brain. Human brains are only about 4x the size of, and not apparently much more complex per unit mass than, those of the other great apes. Whatever the differences are, they developed in something like 1/1000th of the total evolutionary history behind our species. We’re far too homogeneous in ability (on an inter-species absolute intelligence scale), with too recent evolution, for anything to be fundamentally more complex about our brains compared with apes. If the apes had stagnated in intelligence for a billion years before making some intelligence breakthrough to us in a much shorter time, I might have had a different opinion. The evidence seems to point toward a change in neuron scaling in primates that made increasing neuron counts cheaper, and not much else. As soon as this lowered the marginal cost of supporting more neurons below the marginal benefit of having them, our ancestors fairly steadily increased in both brain size and intelligence from there to here, or at least as steadily as evolution ever gets.
If there are fundamental barriers, then I expect them to be at least as far above us as we are above chimpanzees, because there are no signs that we’re anywhere near as good as it gets. We’re most likely the stupidest possible species capable of nontrivial technology. If we weren’t, then we’d most likely have found evidence of earlier, stupider species on Earth that did it before us.
While I am not certain, I suspect that even otherwise chimpanzee-equivalent AGIs enhanced with the narrow superhuman capabilities we have already built today might be able to outsmart us, even while being behind us in some ways. While humans too can make use of narrow superhuman AI capabilities, we still have to use them as external tools limited by external interfaces, rather than having them integrated into our minds from birth and as automatic to us as focusing our eyes. There is every reason to expect that the relative gains from that kind of integration would be very much greater.
Even if none of those are true, and general intelligence stops at 10-year-old human capability, and they can’t directly use our existing superhuman tools better than we can, I wouldn’t bet heavily against the possibility that merely scaling up speed 100 times (studying and planning for up to a subjective century each year, so a decade or two of wall-clock time amounts to a subjective millennium or two) could let them work out how to get through the barrier to better capabilities in a decade or two. Similarly if they could learn in concert with other instances, all of them benefiting directly in some way rather than via comparatively slow, linear language. There may be many other ways in which we are biologically limited but don’t think of the limits as important, because everything else we’ve ever known shares them. Some AGIs might not share those limits, and might work around their deficiencies in some respects by using capabilities that nothing else on Earth has.
Thanks JBlack, those are some convincing points. Especially the point that even a chimpanzee-level intelligence directly interfaced to present-day supercomputers would likely yield tangible performance greater than any human’s in many ways. Though perhaps the danger is lessened if, for the first few decades, the energy and space requirements are at a minimum equal to those of a present-day supercomputing facility. It’s a big and obvious enough ‘bright line’, so to speak.