As the agent software matures (collecting all the tricks from all the papers), it becomes more likely that the first agents capable of autonomous survival are at barely human level and still incapable of open-ended research, because then autonomous survival wouldn't need to be overdetermined by way-more-than-sufficient capabilities in the underlying LLM. Some uncontrolled barely-AGIs would then go on to live on the Internet without being an extraordinary threat, perhaps for years while the labs are still working on research capabilities, perhaps even thinking through APIs rather than local models. And people get used to that.