Right, but I’m not sure if that’s a particularly important question to focus on. It is important in the sense that if an AI could do that, then it would definitely be an existential risk. But AI could also become a serious risk while having a very different kind of cognitive profile from humans. E.g. I’m currently unconvinced about short AI timelines—I thought the arguments for short timelines that people gave when I asked were pretty weak—and I expect that in the near future we’re more likely to get AIs that continue to have a roughly LLM-like cognitive profile.
And I also think it would be a mistake to conclude from this that existential risk from AI in the near future is insignificant, since an “LLM-like intelligence” might still become very powerful in some domains while staying vastly below the human level in others. But if people only focus on “when will we have AGI”, this point risks getting muddled, when it would be more important to discuss something like “what capabilities do we expect AIs to have in the future, what tasks would those capabilities allow the AIs to do, and what kinds of actions would that imply”.
Do my two other comments [1, 2] clarify that?