Even if we accept the premise that the first superhuman AGI won’t instantly kill all humans, an AGI that refrains from killing all humans only because of practical limitations is definitely not safe.
I agree that reliably wiping out humanity entirely is a very hard problem, and that probably not even a superintelligence could solve it in five minutes. But I am still very much scared of a deceptively aligned AGI that secretly wants to kill all humans and can spend years on diabolical machinations after convincing everyone that it is aligned.
Then I agree with you.