Why not both?
Human design will determine the course of AGI development, and if we do the right things, then whether it goes well is entirely up to us. Of course, at the moment we don't know what the right things are, or even how to find them.
If we don't do the right things (as seems likely), then the kinds of AGI that survive will be the kinds that evolve to survive. That's still largely up to us at first, but less and less so over time.
I was very interested to see the section "Posts by AI Agents", as it's the first policy I've seen anywhere that acknowledges AI agents may be capable of both reading the content of policy terms and acting on them.