It’s hard to imagine a “general intelligence” getting stuck at the level of a ten-year-old child in all areas; surely it would be able to interface with hardware that lets it perform rapid calculations or run other superhuman algorithms.
But there are arguments suggesting that intelligence cannot keep scaling exponentially indefinitely, and that the limits to exponential growth may be reached very soon after AGI is developed, which would make a “foom” effectively impossible. For instance, see this article by Francois Chollet:
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
He makes a number of interesting points. For instance, he notes the slow development of science despite exponentially more resources going into it. He also notes that science and other areas of human endeavor already involve recursive self-improvement, yet they seem to grow linearly, not exponentially.
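As a rough way to see how a recursive feedback loop can still produce linear growth, here is a toy simulation (my own sketch, not a model from the article) in which capability is reinvested into self-improvement but each gain is discounted by a problem difficulty that rises along with capability:

```python
# Toy sketch (my own illustration, not Chollet's model): capability is
# reinvested into self-improvement, but each gain is discounted by a
# problem difficulty that can itself grow with capability.

def simulate(steps, difficulty_tracks_capability=True):
    capability = 1.0
    trajectory = [capability]
    for _ in range(steps):
        difficulty = capability if difficulty_tracks_capability else 1.0
        capability += 0.1 * capability / difficulty  # reinvested gain, discounted by difficulty
        trajectory.append(capability)
    return trajectory

print(simulate(50)[-1])         # ~6.0: roughly linear growth despite the feedback loop
print(simulate(50, False)[-1])  # ~117.4: pure compounding, i.e. an exponential "foom"
```

When difficulty keeps pace with capability, the gains arrive at a roughly constant rate; only when difficulty stays fixed does the loop compound into an exponential curve.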
Another point is that some physical systems (e.g. chaotic ones) are simply impossible to predict over time scales of days or longer, even for a superintelligent AI with vast computational resources. So there are at least some limits there.
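To make the chaos point concrete, here is a small numerical sketch (my own example, not from the article): two copies of the Lorenz system started one part in a billion apart end up on completely different trajectories after a modest amount of simulated time, so longer forecasts fail no matter how much compute is thrown at them:

```python
# Two Lorenz trajectories whose initial states differ by one part in a billion,
# integrated with a simple Euler step. The separation between them grows from
# 1e-9 to the size of the whole attractor, illustrating sensitive dependence
# on initial conditions.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)      # identical except for one part in a billion
for step in range(40001):        # roughly 40 simulated time units
    if step % 10000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t={step * 0.001:5.1f}  separation={gap:.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```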
The other related reference I would recommend is this interview with Robin Hanson: https://aiimpacts.org/conversation-with-robin-hanson/
Thanks for the links. It may be that the development of science, and of technical endeavours in general, follows a pattern of punctuated equilibrium: sub-linear growth, or even regression, for the vast majority of the time, interspersed with brief periods of tremendous change.