The G stands for general, so I don’t see why you would need a bunch of special-purpose definitions of AGI. You seem to disbelieve in general intelligence for the reasons in this footnote:
With due respect to Kurzweil, I think his definition is rather flawed, to be honest (personal rant incoming). Name me a single human who can “successfully perform any intellectual task that a human being can.” Try to find even one person who is successful at any task anyone can possibly do. Such a person does not exist. All humans are better in some areas and worse in others, if only because we do not have infinite time to learn every possible skillset (though to be fair, most other definitions on this list run into the same issue). See the closing paragraph of this post for more.
… But general intelligence doesn’t have to be a claim about a specific human; it can be a claim about humans in aggregate … and still be distinct from superintelligence.
If it’s true that no human can perform every intellectual task doable by at least one human (which is probably the case), then “can perform any intellectual task a human can” can’t be a reasonable criterion for calling a specific AI system “intelligent” or “generally intelligent” or whatever. So, to whatever extent Kurzweil’s criterion is intended to be used that way, maybe it’s a bad criterion.
As you say, we can apply the term to populations rather than individuals, and maybe it’s interesting to ask not “when will there be a computer system that can do whatever humans can?” but “when will computer systems, collectively, be able to do all the things humans can, collectively?”.
…then “can perform any intellectual task a human can” can’t be a reasonable criterion for calling a specific AI system “intelligent” or “generally intelligent” or whatever.
AI and AGI aren’t supposed to be synonyms. Defining AGI in terms of a specific human’s capabilities is pretty pointless. Defining it in terms of an average or a maximum distinguishes AGI from AI (and ASI).
As you say, we can apply the term to populations rather than individuals, and maybe it’s interesting to ask not “when will there be a computer system that can do whatever humans can?” but “when will computer systems, collectively, be able to do all the things humans can, collectively?”
I don’t see why they can’t both be interesting.