I do think there’s a reasonable possibility that multiple not-fully-human-controlled AGIs will end up competing against each other for various forms of power. I just don’t think the specific scenario you outline is a particularly plausible way to get there. Also, humanity has a lot more leverage before that situation comes to pass, so I believe we get more ‘expected value per unit of effort’ by focusing our safety planning on preventing a ‘multiple poorly controlled AGIs competing’ situation from arising in the first place, rather than on handling it once it exists.