This is reasonable—but what is odd to me is the world-conquering part. The justifications that I’ve seen for creating a singleton soon (e.g. either we have a singleton or we have unfriendly superintelligence) seem insufficient.
If a superintelligence is able to find a way to reliably prevent the emergence of a rival, any preventable existential risk, or any sufficiently undesirable action without conquering the world, then by all means it can do that instead.