As an intuition pump, imagine that to become “fully human”, whatever that means, the AI needs to have three traits, call them “A”, “B”, and “C”, whatever those might be. It seems unlikely that the AI would gain all three traits in the same version. More likely, first we will get an AI 1.0 that has trait “A” but lacks traits “B” and “C”. At some later time, we will have an AI 2.0 that has traits “A” and “B”, but still lacks “C”.
Now, the important thing is that if AI 1.0 had trait “A” at human level, scientific progress on “A” probably continued in the meantime, so AI 2.0 most likely already has trait “A” at super-human level. So it has super-human “A”, human-level “B”, and still lacks “C”.
And for the same reason, when we later get an AI 3.0 that finally has all the traits “A”, “B”, and “C” at least at human level, it will likely already have traits “A” and “B” at super-human level; trait “A” probably at insanely super-human level. In other words, the first “fully human” AI will actually already be super-human in some aspects, simply because it is unlikely that all aspects would reach the human level at the same time.
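If it helps, here is a toy Monte Carlo sketch of that argument. All the numbers in it (starting levels, growth rates, the human threshold) are made up purely for illustration; the only point is that when the slowest trait finally crosses the threshold, the faster ones are typically already well past it.

```python
import random

# Toy illustration of the A/B/C argument above. The traits, thresholds,
# and growth rates are invented numbers, not a model of anything real.

HUMAN_LEVEL = 1.0
TRAITS = ["A", "B", "C"]

def first_fully_human_version(seed):
    rng = random.Random(seed)
    # Each trait starts at a different point and improves at a different rate.
    levels = {"A": 0.5, "B": 0.2, "C": 0.0}
    rates = {"A": 0.30, "B": 0.15, "C": 0.05}
    version = 0
    while min(levels.values()) < HUMAN_LEVEL:
        version += 1
        for t in TRAITS:
            levels[t] += rng.uniform(0, 2) * rates[t]  # noisy progress per version
    return version, levels

version, levels = first_fully_human_version(seed=0)
print(f"first 'fully human' version: {version}")
for t in TRAITS:
    print(f"  trait {t}: {levels[t]:.2f}  (human level = {HUMAN_LEVEL})")
# Typically A and B are already well above 1.0 by the time C finally crosses it.
```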
For example, the first “fully human” AI will easily win chess tournaments, simply because current AIs can already win them. Instead of some average Joe, the first “fully human” AI will be, at the very least, some kind of super-Kasparov.
How dangerous is it to have some super-human traits while being “fully human” otherwise? That depends on the trait. Notice how much smarter people can act when you simply give them more time, or better memory (e.g. pen and paper, or some personal wiki software), or the ability to work in groups. If the AI is otherwise like a human, except e.g. 100 times faster, or able to keep a wiki in its head, or able to do real multitasking (i.e. split into a few independent processes and explore an issue from multiple angles at the same time), that could already make it quite smart. (Imagine how your life would change if at any moment you could magically take an extra hour to think about things, with all the relevant books in your head and an invisible team of equally smart discussion partners.) And these are just rather boring extensions of human traits; it could also have e.g. 100 times greater ability to recognize patterns, or perhaps some trait we don’t have a word for yet.
Generally, the idea is that “an average human” is a tiny dot on the intelligence scale. If you make small jumps upwards on that scale, the chance that one of your jumps will end exactly on this dot is very small. More likely, your jumps will repeatedly land below the dot, until at some moment the next jump carries you over it, to some higher place. Humans have many “hardware” limitations that keep them in a narrow interval, such as brains made of meat running at about 200 Hz, or heads small enough to allow childbirth. The AI will have none of these. It will have its own hardware limits, but those will follow different rules. So it seems possible that e.g. an AI built in 2022 will run at a human equivalent of 20 Hz, and an AI built in 2023 will run at a human equivalent of 2000 Hz, simply because someone invented a smart algorithm allowing much faster simulation of neurons, or used a network of a thousand computers instead of one supercomputer, or rewrote the critical parts from Lisp to C++ and ran them on a GPU, etc.
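The same point can be made with another toy simulation, again with completely arbitrary numbers: if each new version improves by a random jump that is large compared to the width of the “roughly human” band, most trajectories never stop inside the band at all; they go straight from below it to somewhere above it.

```python
import random

# Minimal sketch of the "small jumps on a wide scale" point. The band,
# jump sizes, and trial count are arbitrary, chosen only to show the shape
# of the intuition, not to estimate anything about real AI progress.

rng = random.Random(42)
HUMAN_BAND = (100.0, 101.0)   # a narrow "roughly human" interval on a wide scale
TRIALS = 100_000

landed_in_band = 0
for _ in range(TRIALS):
    level = 0.0
    while level < HUMAN_BAND[1]:
        level += rng.uniform(0, 20)   # one "jump" per new version
        if HUMAN_BAND[0] <= level <= HUMAN_BAND[1]:
            landed_in_band += 1
            break

print(f"fraction of runs where a jump ended inside the human band: "
      f"{landed_in_band / TRIALS:.3f}")
# Most runs jump straight over the band without ever pausing inside it.
```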
But perhaps it is enough to accept that the first “fully human” AI could beat you at chess even in its sleep, and ask yourself what the chance is that chess would be the only such example.
“For example, the first “fully human” AI will easily win chess tournaments, simply because current AIs can already win them.”
No, they can’t. *Chess-playing programs* can easily win tournaments; self-driving cars and sentiment analysers can’t. An AGI that had the ability to run a chess-playing program would be able to win, but the same applies to humans with the same ability.