If we have two distinct AI safety plans, it would be sensible for the researchers to have a thorough discussion about which is better and only turn that one on. Failing that, and assuming neither AI is fatally flawed, I would expect them to cooperate: they have very similar goals, and neither wants war.
Historically, that hasn't worked. Some of the most bitter conflicts were between the two main branches of a single tradition, say, Islam (Shia and Sunni), or between rival socialist groups. We could hope, though, that rationalists are better at cooperation.