Any attempt at regulation is clearly pointless without global policing. And China (along with the lesser threats of Russia, Iran, and perhaps North Korea) is not going to comply, whatever its officials say to your face when you try to impose it. The same issues were evident in attempts to police nuclear proliferation and arms-reduction treaties during the Cold War, even when both sides saw benefit in compliance. For AI, they will simply continue development in hidden or even mobile facilities.
Making the necessary anytime-and-everywhere inspection regime happen would require a convincing threat of nuclear escalation, or possibly a rock-solid economic blockade/embargo of non-transparent regimes. The political will for these options is nil. No politician wants to escalate tensions; they are highly risk-averse, and in a democracy they could not sell such measures to the public anyway. Our democracies do not select for leaders who would be capable of holding such a line.
So without instituting liberal democracies everywhere, led by people who genuinely put humanity's future ahead of their national interests, this line of slowing development via regulation to buy time for alignment research seems rather pointless.
Humanity needs a colossal but survivable AI scare, ASAP, if it is to develop the collective will to regulate AI development effectively rather than sleepwalk off the cliff edge of extinction, which seems to be our current lazy and uninterested path.
I think politically realistic hardware controls could buy significant time, or could be used to push other jurisdictions to implement appropriate regulation and permit international verification as a condition of access to hardware. This seems increasingly plausible given the United States' apparent willingness to try to control access to hardware (e.g. see here).
The parallel to the nuclear case doesn't work:
Successfully building nuclear weapons is to China's advantage.
Successfully building a dangerously misaligned AI is not (it is in neither national, party, nor personal interest).
The clearest path to regulation working with China is to get them to realize the scale of the risk, and that the risk applies even if they are the only ones still rushing forward.
It's not an easy path, but it's not obvious that convincing China that rushing forward is foolish is any harder than convincing the US, the UK, etc.
Conditional on international buy-in on the risk, the game theory looks very different from the nuclear case. (Granted, it's also worse in some ways, since the upside of [defecting-and-getting-lucky] is much higher.)
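To make that difference concrete, here is a toy sketch in Python, with entirely made-up payoff numbers (my own illustration, not anything claimed above): in the nuclear game, rushing is a dominant strategy, but once both sides genuinely believe a rushed AI is likely misaligned, it stops being one.

```python
# Toy payoff matrices contrasting the two races. All numbers are invented
# for illustration. Row player = "us", column player = "them";
# payoffs are (row, column).

ACTIONS = ("restrain", "rush")

nuclear = {
    ("restrain", "restrain"): (3, 3),    # mutual restraint
    ("restrain", "rush"):     (0, 5),    # unilateral disadvantage
    ("rush",     "restrain"): (5, 0),    # a working bomb benefits its builder
    ("rush",     "rush"):     (1, 1),    # costly arms race
}

# With genuine buy-in that a rushed AI is likely misaligned, "winning"
# the race is itself a loss for everyone, including the winner.
ai_with_buyin = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "rush"):     (-10, -10),  # their misaligned AI harms both
    ("rush",     "restrain"): (-10, -10),  # so does ours
    ("rush",     "rush"):     (-10, -10),
}

def best_response(game, their_action):
    """Row player's best action given the opponent's fixed action."""
    return max(ACTIONS, key=lambda a: game[(a, their_action)][0])

for name, game in (("nuclear", nuclear), ("AI with buy-in", ai_with_buyin)):
    print(name, {them: best_response(game, them) for them in ACTIONS})

# nuclear:         rushing is a best response to everything -> arms race.
# AI with buy-in:  restraint is always a best response -> mutual restraint
#                  is stable.
```

The [defecting-and-getting-lucky] caveat above would add a small probability of a large payoff to the "rush" rows, which weakens but does not erase this structural difference.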