There already are general AIs. They just are not powerful enough yet to count as True AGIs.
Can you say what you have in mind as the defining characteristics of a True AGI?
It’s becoming a pet peeve of mine how often people these days use the term “AGI” w/o defining it. Given that, by the broadest definition, LLMs already are AGIs, whenever someone uses the term and means to exclude current LLMs, it seems to me that they’re smuggling in a bunch of unstated assumptions about what counts as an AGI or not.
Here are some of the questions I have for folks who distinguish between current systems and future “AGI”:
Is it about just being more generally competent (s.t. GPT-X will hit the bar, if it does a bit better on all our current benchmarks, w/o any major architectural changes)?
Is it about being always on, and having continual trains of thought, w/ something like long-term memory, rather than just responding to each prompt in isolation?
Is it about being formulated more like an agent, w/ clearly defined goals, rather than like a next-token predictor?
If so, what if the easiest way to get agent-y behavior is via a next-token (or other sensory modality) predictor that simulates an agent — do the simulations need to pass a certain fidelity threshold before we call it AGI?
What if we have systems with a hodge-podge of competing drives (like a character in The Sims) and learned behaviors, that in any given context may be more or less goal-directed, but w/o a well-specified over-arching utility function (just like any human or animal) — is that an AGI?
Is it about being superhuman at all tasks, rather than being superhuman at some and subhuman at others (even though there’s likely plenty of risk from advanced systems well before they’re superhuman at absolutely everything)?
Given all these ambiguities, I’m tempted to suggest we should in general taboo “AGI”, and use more specific phrases in its place. (Or at least, make a note of exactly which definition we’re using if we do refer to “AGI”.)
FWIW I put a little discussion of (part of) my own perspective here. I have definitely also noticed that using the term “AGI” without further elaboration has become a lot more problematic recently. :(
I use “AGI” to refer to the autonomous ability to eventually bootstrap to the singularity (far-future tech) without further nontrivial human assistance (apart from keeping the lights on and fixing out-of-memory bugs and such, if the AGI is initially too unskilled to do it on their own). The singularity is what makes AGI important, so that’s the natural defining condition. AGI in this sense is also the point when things start happening much faster.
Random reminder that the abilities listed here as lacking, but functionally very attractive to reproduce in AI (offline processing, short- and long-term memory, setting goals, thinking across contexts, generating novel and flexible rational solutions, internal loops), are abilities closely related to our current understanding of the evolutionary development of consciousness for problem solving in biological life. And that optimising for more human-like problem solving through pressure for results and random modifications comes with a still-unclear risk of pushing AI down the same path to sentience. Sentience is a functional trait: we, and many other unrelated animals, have it for a reason. We need it to think the way we do, have been unable to find a cheaper workaround, and it evolved multiple times on this planet, without an intentional designer, under problem-solving pressure. It is not a mystical or spiritual thing; it is a brain process that enables better behaviour. We do not understand why this path kept being taken in biological organisms, and we do not understand whether AI has an alternate path open; we are just chucking the same demand at it and letting it adapt to solve it.