I agree 100%. It would be interesting to explore how the term “AGI” has evolved, maybe starting with Goertzel and Pennachin 2007 who define it as:
a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions
On the other hand, Stuart Russell testified that AGI means
machines that match or exceed human capabilities in every relevant dimension
so the experts seem to disagree. (Then again, Russell and Norvig’s textbook cites Goertzel and Pennachin 2007 when mentioning AGI. Confusing.)
In any case, I think it’s right to say that today’s best language models are AGIs for any of these reasons:
They’re not narrow AIs.
They satisfy the important parts of Goertzel and Pennachin’s definition.
The tasks they can perform are not limited to a “bounded” domain.
In fact, GPT-2 is an AGI.