I would say that the idea of superintelligence is important for the idea that AGI is hard to control (because we likely can’t outsmart it).
I would also say that there will not be any point at which AGIs are “as smart as humans”. The first AGI may be dumber than a human, and it will be followed (perhaps immediately) by something smarter than a human, but “as smart as a human” is a nearly impossible target to hit, because humans work in ways that are alien to computers. For instance, humans are very slow and have terrible memories, while computers are very fast and have excellent memories when utilized — or no memory at all when not programmed to remember something; e.g. GPT-3 immediately forgets its prompt and its outputs.
This is made worse by the impatience of AGI researchers, who will be trying to create an AGI “as smart as a human adult” in a span of 1 to 6 months, because they’re not willing to spend 18 years on each attempt. So if they succeed, they will almost certainly have invented something that, over a longer training interval, becomes smarter than a human. Cf. my own 5-month-old human.