The fact that an AI arms race would be extremely bad does not imply that rising global authoritarianism is not worth worrying about (and vice versa)
I am someone who is worried both about AI risks (from loss of control, and from war, misuse, and structural risks) and about what seems to be a ‘new axis’ of authoritarian threats cooperating in unprecedented ways.
I won’t reiterate all the evidence here, but these two pieces and their linked sources should suffice:
https://www.noahpinion.blog/p/the-free-world-teeters-on-the-edge
https://www.theatlantic.com/politics/archive/2024/10/us-election-wartime-president/680326/
Despite believing this thesis, I am not, on current evidence, in favor of aggressive efforts to “race and beat China” in AI, or of abandoning attempts to slow an AGI race. I think on balance it is still worth pursuing this kind of cooperation, while remaining clear-eyed about the threats we face. I do think there are possible worlds where, regretfully and despite the immense dangers, there is no option but to race. I don’t think we are in such a world yet.
However, I notice that many of the people who agree with me that an AI arms race would be very bad and should be avoided tend to downplay the risks of global authoritarianism, or the differences between the West and its adversaries, and very few seem to accept the thesis above: that a dangerous, interconnected web of authoritarian states with common interests is developing.
Similarly, most of the people who see the authoritarian threat that has emerged into clear sight over the last few years (from China, Russia, Iran, North Korea, and similar actors) want to respond by racing, and they assume alignment will not be too difficult. This includes the leaders of many AI companies, who may have their own less patriotic reasons for pushing such an agenda.
I think this implicit correlation should be called out as a mistake.
As a matter of simple logic, how dangerous frantic AGI development is and how hostile our foreign adversaries are are two independent variables; beliefs about one shouldn’t predict beliefs about the other.
In my mind, the following are all true:
1. An AI arms race would be extraordinarily dangerous: it would drastically raise the chance of nuclear war, and probably also the chance of loss of control of AGI (leading to human extinction) or of destructive misuse. It is well worth trying hard to avoid an AI arms race, even if our adversaries are genuinely dangerous, even if we won’t cooperate with them on other matters, and even if the prospects seem dim.
2. It is clearly much better that democratic societies control an AGI singleton than non-democratic countries like China, if those are the options. And, given current realities, there is a chance that an arms race is inevitable no matter how dangerous it is. If an arms race is inevitable, and transformative AI will do what we want, it is much better that the Western democratic world leads instead of authoritarian countries, especially if it is also developing AI under safer and more controlled conditions (which seems likely to me).
3. If alignment isn’t solvable, or if the offense-defense balance is unfavorable, then it doesn’t matter who develops AGI: it is a suicide race. But we don’t yet know whether that is the case.
I almost never see all three acknowledged at once. We either see (1) and (3) grouped together, or (2) alone. I’m not sure what the best AI governance strategy is, but any analysis should start with a clear-eyed understanding of the international situation and of which values matter.