There are different definitions of AGI, but I think they tend to cluster around the core idea “can do everything smart humans can do, or at least everything nonphysical / everything they can do at their desk.” LLM chatbots are a giant leap in that direction in progress-space, but they are still maybe only 10% of the way there in what-fraction-of-economically-useful-tasks-can-they-do space. True AGI would be a drop-in substitute for a human employee in any remote-friendly job; current LLMs are not that for pretty much any job, though they can substitute for (some) particular tasks in many jobs.
And the main reason for this, I claim, is that they lack agency skills: Put them in an AutoGPT scaffold and treat them like an employee, and what’ll happen? They’ll flail around uselessly, get stuck often, break things and not notice they broke things, etc. They’ll be a terrible employee despite probably knowing more relevant facts and understanding more relevant concepts than your average professional.
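For concreteness, here is the core pattern such a scaffold reduces to: a loop that hands the model a goal, executes whatever action it names, and feeds the result back as the next observation. This is a minimal sketch, not any particular framework's API; the `llm` stub, the `ACTION:`/`DONE:` message format, and the `run_action` helper are all placeholders I've made up for illustration.

```python
# Minimal agent-loop sketch: the basic pattern behind AutoGPT-style scaffolds.

def llm(messages: list[dict]) -> str:
    """Placeholder for a real chat-completion API call."""
    return "ACTION: shell: ls"  # canned response, for illustration only

def run_action(action: str) -> str:
    """Execute the model's chosen action and return an observation.
    A real scaffold would run shell commands, browse, edit files, etc."""
    return f"(pretend output of {action!r})"

def agent_loop(goal: str, max_steps: int = 10) -> None:
    messages = [
        {"role": "system", "content": (
            "You are an autonomous employee. Respond with "
            "'ACTION: <tool>: <args>' or 'DONE: <summary>'.")},
        {"role": "user", "content": goal},
    ]
    for _ in range(max_steps):
        reply = llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE:"):
            return
        # Note what the loop does NOT provide: error recovery, noticing
        # that an action broke something, or knowing when it's stuck.
        # All of that is delegated to the model -- the agency skills at issue.
        observation = run_action(reply)
        messages.append({"role": "user", "content": observation})

agent_loop("Prepare the quarterly sales report.")
```

The point of spelling this out is that the scaffold itself is trivial; everything that makes a good employee good has to come from the model, and that is exactly where it currently falls down.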