Your AGI is ASI in embryo. There’s basically no difference. Once AI gets to “human level” generally, it will already have far surpassed humans in many domains. It’s also interesting that many of the “narrow tasks” are handled by basically the same deep learning techniques, which have proven to be very general in scope.
I agree. But then again, that’s true by definition of ‘AGI’ and ‘ASI’.
However, it’s not even clear that the ‘G’ in ‘AGI’ is a well-defined notion in the first place. What does it even mean to be a ‘general’ intelligence? Usually people use the term to mean something like the old definition of ‘Strong AI’, i.e. something that equates to human intelligence in some sense—but even the task human brains implement is not “general” in any real sense. It’s just the peculiar task we call ‘being a human’, the result of an extraordinarily capable aggregate of narrow intelligences!
I agree with this. It also points to one of the problems with the AI risk idea. If there is an AI going around that people call “human level,” it will actually be better than humans in many ways. So why is it not yet able, or not yet willing, to destroy the world? Suppose there are 500 domains left in which it is inferior to humans.
Eliezer says that “superintelligence” for the purposes of our bet only counts if the thing is better than humans in basically every domain. But this seems to imply that at some point, as those 500 remaining areas are whittled away, the AI will suddenly acquire magical powers. If not, it will eventually surpass humans in all 500 areas, and so count as a superintelligence, while the world still goes on as usual.