I have pointed this out to folks in the context of AI timelines: Metaculus gives predictions for “weak AGI,” but I consider a hypothetical GATO-x that can generalize to a task, or many tasks, outside its training distribution to be AGI, and yet still a considerable way from an AGI with enough agency to act on its own.
On the other hand, that isn’t much reassurance if bootstrapping this thing up to agency takes as little as a batch script to keep it running.
But the time between weak AGI and agentic AGI is a prime learning opportunity, and the lesson is that we should do everything we can to prolong that window once weak AGI is invented.
Also, perhaps someone should study the necessary components of an AGI takeover by simulating agent behavior in a toy model. At a minimum you need some degree of agency, probably a self-model in order to recursively self-improve, and the ability to generalize. Knowing what the necessary components are might enable us to take steps to avoid having them all in one system at once.
If anyone has ever demonstrated, or even systematically described, what those necessary components are, I haven’t seen it done. Maybe it is an infohazard, but it also seems like necessary information to coordinate around.
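To make the toy-model idea a bit more concrete, here is a minimal sketch of the kind of simulation I have in mind: agents in a tiny abstract resource world, with agency, a self-model, and generalization as independently toggleable capabilities, so you can compare which combinations let an agent expand its control. Everything here (the world dynamics, the multipliers, `run_agent`) is invented purely for illustration, not a claim about how a real study would need to be set up:

```python
# Hypothetical toy model: which combinations of (agency, self-model,
# generalization) let an agent expand its control over a resource world?
# All dynamics and numbers below are made up for illustration only.

import itertools
import random

WORLD_RESOURCES = 100   # total resources an agent could potentially control
STEPS = 50              # number of simulation steps per run

def run_agent(agency: bool, self_model: bool, generalize: bool, seed: int = 0) -> float:
    """Return the fraction of world resources controlled after STEPS steps."""
    rng = random.Random(seed)
    controlled = 1.0   # the agent starts with a tiny slice of the world
    capability = 1.0   # how effectively actions convert into control

    for _ in range(STEPS):
        if not agency:
            # Without agency, the agent only acts when prompted (rarely).
            if rng.random() > 0.1:
                continue
        if self_model:
            # A self-model lets the agent improve its own capability
            # (a crude stand-in for recursive self-improvement).
            capability *= 1.05
        # Generalization determines whether gains transfer to new "tasks";
        # without it, gains saturate as controlled resources grow.
        gain = capability * (0.5 if generalize else 0.5 / (1 + controlled))
        controlled = min(WORLD_RESOURCES, controlled + gain)

    return controlled / WORLD_RESOURCES

if __name__ == "__main__":
    # Ablate each component independently and compare outcomes.
    for agency, self_model, generalize in itertools.product([False, True], repeat=3):
        share = run_agent(agency, self_model, generalize)
        print(f"agency={agency!s:5} self_model={self_model!s:5} "
              f"generalize={generalize!s:5} -> controls {share:.0%} of resources")
```

A real study would obviously need much richer environments and agent architectures; the point is only that the candidate components can be ablated independently and their contributions compared.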
Yes, in this interview Connor Leahy said he has an idea of what these components are, but he wouldn’t share it publicly.