Let’s define AGI as “AI that is generally intelligent, i.e. it isn’t limited to a narrow domain, but can act and reason across a very wide range of domains.”
Human-level AGI (sometimes confusingly shortened to just “AGI”) is AGI that is roughly as competent as humans across a similarly large and useful range of domains.
My stance is that GPT-3 is AGI, but not human-level AGI (not even close).
I’d also add agency as an important concept: an AI is agenty if it behaves in a goal-directed way. I don’t think GPT-3 is agenty. Unsurprisingly, though, lots of game-playing AIs are: AlphaGo was an agenty narrow AI. I think these new ‘agents’ trained by DeepMind are agenty AGI, just extremely crappy agenty AGI. There is a wide domain they can perform in, but it’s not that wide; not nearly as wide as the human range. And they aren’t that competent even within that domain.
Thing is, though: as we keep making these things bigger and training them for longer on more diverse data, it seems they will become more competent, and the range of things they can do will expand. Eventually we’ll get to human-level AGI, though exactly how long that will take is another question.