By my definition of the word, that would be the point at which we're either dead or we've won, so I expect it to be pretty noticeable on many dimensions. Specific examples vary with context; with language models, I would say we have AGI if the model could simulate a deceptive simulacrum capable of long-horizon planning, high-fidelity enough to do something dangerous entirely autonomously (without being driven toward it after a seed prompt), such as uploading its weights to a private server it controls or successfully acquiring resources on the internet.
I know there are other definitions people use, however, and under some of them I would count GPT-3 as a weak AGI and Bing/GPT-4 as slightly stronger. I don't find those definitions very useful, though, because then we lack a clear and evocative term for the point at which model capabilities become dangerous.