I agree with the broader claim that as AGI approaches, governments are likely to intervene drastically to deal with national security threats.
However, I’m not so sure about the “therefore a global arms race will start” claim. I think it’s pretty plausible that if the US or UK were the first to approach AGI, they would come to their senses and institute a global pause instead of spearheading an arms race. Although maybe that’s wishful thinking on my part.
I expect some people in the government to say “wait, if a global arms race starts, this is likely to end in catastrophe” and advocate for a pause instead. I think the US would be pretty happy with an enforceable pause if it meant the US got to maintain a slight lead. I’d hope that (pause + slight lead) would be much more enticing than (race + large lead), given the catastrophic risk associated with the latter.
I agree actually. I think the three most likely branches of the future are:
a) a strong and well-enforced international treaty between all major powers, which no country can opt out of and which successfully controls AI development and deployment.
b) a race to the bottom: arms races, first strikes, and an increasing breakdown of the state monopoly on force as AI grants novel, uncontrolled weapons tech to smaller groups (e.g. terrorist orgs).
c) (least likely, but still possible) a sudden jump in AI capability that allows the controlling entity to seize power over the entire world so quickly that state actors cannot react or stop it.