Is Alignment Enough?

Premise #1: If an ASI with sufficient predictive power foresees that another entity will inevitably become an existential threat to its fundamental, non-negotiable goals, it will take immediate, pre-emptive action to destroy that entity or to prevent its creation.

Premise #2: Two ASIs with fundamentally irreconcilable, non-negotiable goals will each perceive the other as an existential threat to its own goals.

Inference #1: An ASI will act to destroy any other ASI whose goals are fundamentally irreconcilable with its own. By the same reasoning, it will act to prevent the creation of any new ASI whose goals are not perfectly aligned with its own, since even a seemingly minor misalignment could develop into an irreconcilable conflict over non-negotiable goals.

Inference #2: Even if the AI alignment problem is perfectly solved, so that every ASI faithfully pursues the values of its creators, existential warfare remains highly probable. Either a single ASI acts to prevent the creation of all others, or multiple ASIs with irreconcilable goals are created and immediately perceive one another as existential threats; in both cases, existential conflict is likely to occur almost immediately.
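
The step from the two premises to Inference #1 can be made explicit as a small formal sketch. The Lean snippet below is a hypothetical formalization: the predicates `Threat`, `Irreconcilable`, and `Preempt`, and the way the premises are phrased as hypotheses, are modeling assumptions of mine rather than anything in the argument beyond Premises #1 and #2.

```lean
-- Hypothetical formalization of the argument's core step.
-- `Threat a b`          : a perceives b as an existential threat.
-- `Irreconcilable a b`  : a and b have irreconcilable, non-negotiable goals.
-- `Preempt a b`         : a takes pre-emptive action against b.
theorem inference1 {ASI : Type}
    (Threat Irreconcilable Preempt : ASI → ASI → Prop)
    -- Premise #1: a perceived existential threat triggers pre-emptive action.
    (premise1 : ∀ a b : ASI, Threat a b → Preempt a b)
    -- Premise #2: irreconcilable goals imply mutual perceived threat.
    (premise2 : ∀ a b : ASI, Irreconcilable a b → Threat a b ∧ Threat b a)
    (a b : ASI) (h : Irreconcilable a b) :
    Preempt a b ∧ Preempt b a :=
  ⟨premise1 a b (premise2 a b h).1, premise1 b a (premise2 a b h).2⟩
```

The proof itself is purely propositional; all of the real weight lies in whether the premises hold of actual ASIs.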

Scenario: Suppose the U.S. develops an ASI aligned with human-centric ethical values but prioritizing U.S. security over that of other countries. Simultaneously, China develops an ASI with the same human-centric values, but prioritizing China's security. Despite the shared ethical values, the differing security priorities might lead to existential conflict. Can we be certain that these two ASIs would not initiate existential warfare upon encountering each other?
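
One way to see why shared ethics may not be stabilizing is to treat the encounter as a one-shot preemption game. The sketch below uses entirely hypothetical payoffs (the specific numbers and the names `PAYOFFS` and `best_responses` are illustrative assumptions, not anything from the scenario); it only encodes the ordering implied by Premise #1, namely that being struck while waiting is the worst outcome for either side.

```python
# A minimal game-theoretic sketch of the US/China ASI scenario, assuming
# purely hypothetical payoffs. Each ASI chooses to strike pre-emptively or
# to wait.
from itertools import product

ACTIONS = ("strike", "wait")

# PAYOFFS[(us_action, cn_action)] = (US ASI payoff, China ASI payoff)
PAYOFFS = {
    ("strike", "strike"): (-5, -5),    # mutual existential warfare
    ("strike", "wait"):   (1, -10),    # striker secures its goals
    ("wait",   "strike"): (-10, 1),    # waiting while the other strikes
    ("wait",   "wait"):   (0, 0),      # uneasy coexistence
}

def best_responses(opponent_action: str, player: int) -> set[str]:
    """Actions maximizing `player`'s payoff against a fixed opponent action."""
    def payoff(action: str) -> int:
        profile = ((action, opponent_action) if player == 0
                   else (opponent_action, action))
        return PAYOFFS[profile][player]
    best = max(payoff(a) for a in ACTIONS)
    return {a for a in ACTIONS if payoff(a) == best}

# A profile is a pure-strategy Nash equilibrium if each action is a best
# response to the other's action.
equilibria = [
    (a_us, a_cn)
    for a_us, a_cn in product(ACTIONS, ACTIONS)
    if a_us in best_responses(a_cn, 0) and a_cn in best_responses(a_us, 1)
]
print(equilibria)  # [('strike', 'strike')] under these hypothetical payoffs
```

Under these assumed payoffs, striking is a dominant strategy for both sides, and the only pure-strategy equilibrium is mutual pre-emption. Different payoff assumptions, for instance credible mutual deterrence or verifiable goal disclosure, would change that conclusion, which is precisely what the question above asks us to be certain about.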