This may be too late to get an object-level response. But I think there's a critical variable missing from this analysis, one that pushes the timeline much closer.
Criticality. I mean this in the nuclear-pile sense: as the pile approaches critical mass, activity rises, and once the gain crosses a threshold, it rises exponentially.
In the AGI case, there is a clear and obvious feedback mechanism.
Narrow AIs can propose AI architectures that may scale to AGI, help us design the chips they run on, speed up the search for lithography settings that yield smaller transistors, translate code from a slow language to a faster one, write code modules, and, later on, drive the robots that mass-manufacture compute nodes at colossal scale.
In addition, as you begin to approach AGI, you can add tasks to your AGI benchmark suite like "design a narrow AI that can design AGIs" (the indirect approach) or "design a better AGI that will do well on this benchmark".
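The criticality analogy can be made concrete with a toy model (a sketch for illustration only: the constant per-cycle gain factor `g` is an assumption I'm introducing, analogous to the neutron multiplication factor k in a reactor, not a forecast of actual AI dynamics):

```python
# Toy model of criticality in a self-improvement feedback loop.
# Assumption: each feedback cycle multiplies capability by a fixed gain g,
# like the neutron multiplication factor k in a nuclear pile.

def capability_after(cycles: int, gain: float, initial: float = 1.0) -> list[float]:
    """Return the capability trajectory over a number of feedback cycles."""
    level = initial
    history = [level]
    for _ in range(cycles):
        level *= gain  # each cycle's output feeds the next cycle's input
        history.append(level)
    return history

# Subcritical (g < 1): each cycle helps less than the last; activity decays.
subcritical = capability_after(10, gain=0.9)

# Supercritical (g > 1): each cycle amplifies the next; exponential growth.
supercritical = capability_after(10, gain=1.1)
```

The point of the analogy is the threshold at g = 1: below it, AI-assisted AI research is just another productivity tool; above it, each generation of tools shortens the time to the next generation, and progress compounds.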