The definition you quoted is “a machine capable of behaving intelligently over many domains.”
It seems to me like existing AI systems have this feature. Is the argument that ChatGPT doesn’t behave intelligently, or that it doesn’t do so over “many” domains? Either way, if you are using this definition, then saying “AGI has a significant probability of happening in 5 years” doesn’t seem very interesting and mostly comes down to a semantic question.
I think “AGI” is sometimes used within a worldview where “general intelligence” is a discrete property, and AGI is something with that property. It is also sometimes used to refer to AI that can do more or less everything a human can do. I have no idea what the OP means by the term.
My own view is that “AGI company” or “AGI researchers” makes some sense as a way to pick out some particular companies or people, but talking about AGI as a point in time or a specific technical achievement seems unhelpfully vague.
I think you’re contrasting AGI with Transformative AI
A sufficiently capable AGI will be transformative by default, for better or worse, and an insufficiently capable but nonetheless fully general AI is probably a transformative AI in embryo, so the two terms have been used synonymously. The fact that we feel the need to make this distinction for current AIs is worrisome.
Current large language models have become impressively general, but I don’t think they are as general as humans yet. Then again, maybe that’s more a question of capability level than generality level, and some of our current AIs are already AGIs, as you imply. I’m not sure. (I haven’t talked to Bing’s new AI yet, only ChatGPT.)