“Either they’re perfectly doable by humans in the present, with no AGI help necessary.”
So, your argument for why this statement is relevant is that AI isn’t adding danger? That seems to me to be a really odd standard for “perfectly doable”: the actual number of humans who could do those things is not huge, and humans usually don’t want to.
Either ending the world is easy for humans, in which case AI is dangerous because it will want to, or it’s hard for humans, in which case AI is dangerous because it will do it better.
I don’t think that works to dismiss that category of risk.