Is it a fair assumption to say that the Intelligence Spectrum is a core concept underpinning AGI safety? I often feel the idea is the first buy-in required by introductory AGI safety texts, and due to my prior beliefs and the lack of emphasis on it in safety introductions, I often bounce off the entire field.
I would say that the idea of superintelligence is important for the argument that AGI is hard to control (because we likely can’t outsmart it).
I would also say that there will not be any point at which AGIs are “as smart as humans”. The first AGI may be dumber than a human, and it will be followed (perhaps immediately) by something smarter than a human, but “as smart as a human” is a nearly impossible target to hit, because humans work in ways that are alien to computers. For instance, humans are very slow and have terrible memories; computers are very fast and have excellent memories when used (or no memory at all if not programmed to remember something; e.g., GPT-3 immediately forgets its prompt and its outputs).
This is made worse by the impatience of AGI researchers, who will be trying to create an AGI “as smart as a human adult” within 1 to 6 months, because they’re not willing to spend 18 years on each attempt. So if they succeed, they will almost certainly have invented something that ends up smarter than a human once trained over a longer interval; cf. my own 5-month-old human.
If by intelligence spectrum you mean variations in capability across different generally intelligent minds, such that there can be minds that are dramatically more capable (and thus more dangerous): yes, it’s pretty important.
If it were impossible to make an AI more capable than the most capable human no matter what software or hardware architectures we used, and no matter how much hardware we threw at it, AI risk would be far less concerning.
But it really seems like AI can be smarter than humans. Narrow AIs (like MuZero) already outperform all humans at some tasks, and more general AIs like large language models are making remarkable and somewhat spooky progress.
Focusing on a very simple case, note that using bigger, faster computers tends to let you do more. Video games are a lot more realistic than they used to be. Physics simulators can simulate more. Machine learning involves larger networks. Likewise, you can run the same software faster. Imagine you had an AGI that demonstrated performance nearly identical to that of a reasonably clever human. What happens when you use enough hardware that it runs 1000 times faster than real time? Even if there is no difference in the quality of its individual insights or the generality of its thought, just being able to think that much faster will make it far, far more capable than a human.
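To make the speed advantage concrete, here is a back-of-the-envelope sketch. The 1000x figure is the hypothetical speedup from the paragraph above; everything else (the function name, the conversion to “subjective years”) is just illustrative arithmetic, not anything from a real system.

```python
# Back-of-the-envelope sketch: how much subjective "thinking time" does a
# human-level AGI accumulate if it runs 1000x faster than real time?
# The speedup is the hypothetical number from the discussion above.

SPEEDUP = 1000  # assumed speedup over human-equivalent real time

def subjective_days(wall_clock_days: float, speedup: float = SPEEDUP) -> float:
    """Subjective thinking time (in days) experienced during the given wall-clock time."""
    return wall_clock_days * speedup

# One wall-clock day of running: ~1000 subjective days, i.e. roughly 2.7 years of thought.
print(f"1 wall-clock day  -> {subjective_days(1):,.0f} subjective days "
      f"(~{subjective_days(1) / 365:.1f} years)")

# One wall-clock year of running: ~1000 subjective years of thought.
print(f"1 wall-clock year -> {subjective_days(365) / 365:,.0f} subjective years")
```

Even with no qualitative edge at all, that ratio alone means the system gets centuries of human-equivalent thinking done in the time a human gets a few months.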