I don’t think “they” would (collectively) decide anything, since I don’t think it’s trivial to cooperate even with a near-copy of yourself. I think they would mostly individually end up working with/for some group of humans, probably either whichever group created them or whichever group they work most closely with.
I agree humans could end up disempowered even if AIs aren’t particularly good at coordinating; I just wanted to put some scrutiny on the claim I’ve seen in a few places that AIs will be particularly good at coordinating.