The biological intelligence explosion
Summary: Human genetic engineering could produce intelligence enhancement, yielding genetic engineers who are better at genetic engineering (and at research on pathways to improving intelligence), which in turn yields a continuing process of greater and greater intelligence. This iterative process would be a human intelligence explosion.
There’s a view that AI will hit a point where it begins an intelligence explosion: an AI system will be designed that is better at designing AI systems than its designers were. As such, it will be able to modify its own design so that it, or a second-generation version of it, is superior to the original. That next version will in turn be sufficiently advanced to create a still more advanced version, and so on. You end up with an iterative process whose rate of progress depends on its current state, and therefore with exponential growth, at least until some limiting factor is reached. Hence, intelligence explosion.
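As a toy illustration, here’s a minimal sketch (in Python) of what that feedback loop looks like numerically: the gain per generation is proportional to current capability, which gives exponential growth until a ceiling binds. Every number in it, the gain coefficient and the ceiling, is an assumption invented for illustration, not an estimate of anything real.

```python
# Toy model of an intelligence explosion: each generation designs the
# next, and design ability scales with current capability. The gain
# coefficient and the ceiling are illustrative assumptions only.

def explosion(c0=1.0, gain=0.1, ceiling=100.0, generations=50):
    """Iterate capability: c_{n+1} = c_n * (1 + gain), capped at a ceiling."""
    c = c0
    trajectory = [c]
    for _ in range(generations):
        # Improvement per step is proportional to current capability,
        # so growth is exponential until the limiting factor kicks in.
        c = min(c * (1 + gain), ceiling)
        trajectory.append(c)
    return trajectory

if __name__ == "__main__":
    for n, c in enumerate(explosion()):
        if n % 10 == 0:
            print(f"generation {n:2d}: capability {c:7.2f}")
```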
This seems like a possible outcome, though the absolute rate of change isn’t clear.
But aside from computer intelligence, there’s another pathway to intelligences improving their own design: humans. With current genome sequencing technology we are identifying a myriad of genetic variants related to intelligence. While each individual variant has only a small effect, the combined effect of many (hundreds) of such variants can be shown to be very large for IQ.
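To make the “many small effects add up” point concrete, here’s a minimal sketch of an additive polygenic score, the standard way such effects are combined: each variant contributes its small effect size times the number of copies carried, and hundreds of terms sum to a large predicted difference. The variant count and effect sizes below are invented for illustration; they are not real GWAS estimates.

```python
import random

# Minimal additive polygenic-score sketch. Effect sizes and allele
# frequencies are invented for illustration; real GWAS estimates differ.

random.seed(0)
N_VARIANTS = 500
# Each variant adds a small amount (in IQ-point-like units) per copy carried.
effects = [random.uniform(0.0, 0.2) for _ in range(N_VARIANTS)]

def polygenic_score(genotype):
    """Sum of per-variant effects weighted by allele count (0, 1, or 2)."""
    return sum(e * g for e, g in zip(effects, genotype))

# A typical genotype: two allele draws per variant at 50% frequency.
typical = [sum(random.random() < 0.5 for _ in range(2))
           for _ in range(N_VARIANTS)]
# A hypothetical edited genotype carrying both copies of every
# beneficial allele.
edited = [2] * N_VARIANTS

print(f"typical score: {polygenic_score(typical):6.1f}")
print(f"edited score:  {polygenic_score(edited):6.1f}")
```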
While gene therapy is in its early stages, it is an emerging technology undergoing rapid progress. It is currently difficult to modify even individual genes in adult organisms: there are off-target effects to worry about, it’s not possible to deliver the genes to every cell, the immune system will attack the viruses used for gene delivery, and so on. But there is already progress on all of these problems. It’s not crazy to think that within a couple of decades we may be able to safely alter dozens or hundreds of genes in adult humans, and if not in 100% of cells, then in a high enough percentage for effective therapy.
If we imagine such a world, we can picture researchers making use of such treatments to improve their own intelligence. That could lead to clearer thinking, more creativity, and both more and better ideas, which in turn could lead to the next wave of breakthroughs, perhaps in which genes to alter or in other avenues for improving intelligence. And as those advances were developed and implemented, the researchers reaping the benefit could use their newly augmented intellects to iterate toward the next advances…
A biological intelligence explosion.
It would likely be much more limited than the AI intelligence explosion. Human brains are constrained in various ways (size being an obvious one) that computers are not. An AI could start from scratch and use a completely new computing substrate in its next design, but that likely wouldn’t be an option for our human researchers, who are manipulating already existing, living human brains (so you don’t want to do anything that risks killing their owners). Nevertheless, even within those constraints there still seems to be a lot of room for improvement, and each improvement should make the next one more likely.
Of course, maybe this will simply be taboo and not be done. Or maybe AI will come along first.
But then, maybe not.
Now, there’s a question of whether worries about the emergence of an artificial superintelligence might be mirrored by analogous worries about a resulting biological superintelligence. I think those worries are at least mitigated, though not resolved, in this case, for a few reasons:
First, as stated above, biological intelligence faces some hard-to-overcome constraints, such as brain size given the human body, and neurons being the specific substrate of computation. These constraints seem unlikely to be overcome and thus impose hard limits on the maximum progress of a biological intelligence explosion.
Second, the alignment problem is difficult in part because an AI system will be so alien to us. Humans, on the other hand, are at least capable of understanding human values. While this doesn’t mean that enhanced human intelligences will necessarily be aligned with unenhanced humans, it does mean that the problem may be more tractable.
However, there still seem to be reasons for concern. While there are hard limits on human intelligence, we don’t quite know where they are, and evolution certainly hasn’t reached them, because the constraints of our ancestral environment have been severely loosened in a modern context. Energy use, for instance, was a major constraint, but food today is very cheap, and a brain using even 10 times as much energy could easily be supplied with enough calories for its computational work. If energy use reached 100 times the current level it might require major changes to other organ systems, but that seems like a feasible late-stage development in our intelligence explosion. Survivability in our ancestral environment was also heavily constrained by locomotion, but this is a much weaker constraint today, so brain size could grow considerably before reaching a fundamental limit. Other things, such as which tasks brains are specialized for, could similarly be improved: mathematical aptitude, for instance, probably didn’t undergo very strong selection in the past but could be strongly favoured if it were seen as useful. All this suggests that while human intelligence would likely reach a limit far before AI did, that limit is quite far from the current level.
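A quick back-of-envelope check of that energy claim, as a Python sketch: the roughly 20 W figure for the resting human brain is a standard one, while the 10× and 100× scalings are the hypotheticals above.

```python
# Back-of-envelope check of the brain-energy claim. The ~20 W baseline
# for the human brain is a standard figure; the 10x and 100x scalings
# are the hypotheticals discussed above.

BRAIN_WATTS = 20.0          # typical resting power draw of a human brain
KCAL_PER_JOULE = 1 / 4184   # 1 kcal = 4184 J
SECONDS_PER_DAY = 86_400

def brain_kcal_per_day(scale=1.0):
    """Daily caloric cost of a brain drawing scale * 20 W."""
    return BRAIN_WATTS * scale * SECONDS_PER_DAY * KCAL_PER_JOULE

for scale in (1, 10, 100):
    print(f"{scale:4d}x brain: {brain_kcal_per_day(scale):8.0f} kcal/day")
# ~   413 kcal/day today; ~4,130 at 10x (elite-endurance-athlete intake,
# so demanding but feasible); ~41,300 at 100x, far beyond what normal
# human digestion can supply, hence changes to other organ systems.
```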
Similarly, while the alignment problem may be more tractable in humans, it’s certainly not solved. We have elaborate political systems because we don’t simply trust that our neighbors share our goals, so there seems little reason to assume that the superintelligent would share the goals of the rest of society. Moreover, in one respect the problem is actually harder with human superintelligence than with machine superintelligence: even at the beginning of the process we have no access to the source code. There’s no chance to make sure the “machine” (i.e., people) is aligned with us from the start. To some extent it may be possible to approximate this with regulatory oversight of the enhancement process, but that seems a cruder tool than designing the system from scratch.
For these reasons I think a human intelligence explosion raises concerns similar to those that have been discussed regarding an AI intelligence explosion.