Human level is not AGI. AGI is autonomous progress: the potential to eventually invent the technology of the distant future, if given enough time.
The distinction is important for forecasting the singularity. If not developed further, modern AI never gets better on its own, while AGI eventually would, even if it's not initially capable of direct self-improvement and needs a relatively long time to get there.
Humans, and hence human-level systems, can invent the technology of the distant future if given enough time. Nick Bostrom discusses speed superintelligence at length in Superintelligence.
The point is that none of the capabilities, in the particular forms they take in humans, are necessary for AGI, and even large collections of them are insufficient, since only the long-term potential matters. Because of the speed disparity with humans, autonomous progress is all it takes to gain any other specific capability relatively quickly. So pointing to human capabilities becoming readily available can be misleading and really shouldn't be called AGI, even if it's a reasonable argument that we are getting there.