I also suspect this coordination might extend further, to AGIs with different architectures.
Why would you suppose that? The design space of AI is incredibly large and humans are clear counter-examples, so the question one ought to ask is: Is there any fundamental reason an AGI that refuses to coordinate will inevitably fall off the AI risk landscape?