What kinds of conflicts are you envisioning?
I think if the argument is something along the lines of “maybe at some point other countries will demand that the US stop AI progress,” then from the perspective of the USG it seems sensible to operate under the mindset of “OK, so we need to advance AI progress as much as possible and try to hide some of it, and if at some future time other countries are threatening us, we need to figure out how to respond.” But I don’t think it justifies anything like “we should pause or start initiating international agreements.”
(Separately, whether or not it’s “truer” depends a lot on one’s models of AGI development. Most notably: (a) how likely is misalignment, (b) how slow will takeoff be (i.e., will it be very obvious to other nations that super-advanced AI is about to be developed), and (c) how will governments and bureaucracies react, and will they be able to react quickly enough?)
(Also separately– I do think more people should be thinking about how these international dynamics might play out & whether there’s anything we can be doing to prepare for them. I just don’t think they naturally lead to an “oh, so we should be internationally coordinating” mentality, and instead lead to much more of a “we can do whatever we want unless/until other countries get mad at us & we should probably do things more secretly” mentality.)
Thanks for spelling it out. I agree that more people should think about these scenarios. I could see something like this triggering serious international coordination (or conflict).
(I still don’t think this would trigger the USG to take different actions in the near term, except perhaps “try to be more secretive about AGI development” and maybe “commission someone to do some sort of study or analysis on how we would handle these kinds of dynamics & what sorts of international proposals would advance US interests while preventing major conflict.” The second one is a bit optimistic, but maybe plausible.)