Cruxes on US lead for some domestic AI regulation
Written quickly. Suggestions welcome.
A possible risk of some US AI regulation is that it would differentially slow US AI progress, and that this would be bad. This post explores the factors that determine how much US regulation would differentially slow US AI progress and how bad that would be.
Note that the differentially slowing US problem only applies to regulation that slows US AI progress (toward powerful/dangerous systems), such as strong regulation of large training runs. The US can do things like facilitate incident reporting and clarify AI labs’ liability for harms without slowing domestic AI progress, and some regulation (especially restricting the publication of AI research and sharing of model weights) would differentially slow foreign AI progress!
Note that international coordination on AI safety mostly avoids this problem.
Cruxes
If I were making a model of the differentially slowing US problem, these would be its factors. (A rough, purely illustrative sketch of how they might combine follows the list.)
(Here “China” can often mean any foreign state. Actual-China seems most relevant because it’s well-positioned to lead on AI in worlds where strong US regulation slows US AI progress.)
- How much would US regulation differentially slow US AI progress? (This and its subcruxes depend on the details of the regulation.)
  - To what extent does the regulation (or legible-safety in general) slow progress?[1]
    - In the abstract?
    - To what extent is leading labs’ behavior already congruent with the regulation?[2]
  - Would China voluntarily follow US regulation?[3] Would the US be able to extraterritorialize its regulation effectively?
  - To what extent does US AI progress boost Chinese AI progress (via e.g. publishing research or leaking insights)?
- How bad would differentially slowing US AI progress be?
  - How bad would it be directly?
    - How far behind is China; how long would it take to catch up (after pricing in the possibility of relevant future events like stronger US export controls)?
      - How far behind is China now?
      - How effectively will US/allies deny compute to China?[4]
        - How strong will US/allied export controls be?
        - How effectively will they be enforced?
        - Will US/allies restrict access to cloud compute for AI training? (How effectively?)
        - Will China attempt to obtain compute illegally? (How effectively?)
      - At a given level of compute access, how committed is China to frontier AI?
    - How much safer are leading US labs than leading Chinese labs?[5]
    - How much safer is it if there are fewer labs at the frontier, and if those labs are located in fewer jurisdictions?
    - How much worse is it for China to control powerful AI?[6]
    - Can a US lab effectively move to China to evade US regulation? [Recall that “China” includes e.g. Canada.]
  - How bad would it be indirectly?
    - Would the US reverse its regulation because it’s losing its lead? (This depends on the particular regulation.)
      - How bad would that be? (This depends on the particular regulation.)
    - Is there an opportunity cost of spending US lead time now? (How big is it?)
      - Would strong regulation happen later?
      - Would strong regulation that only came into force later be better? (How much better?)
    - How much does the US lead help coordination between leading labs?
- Other effects
  - Effect on diplomacy (increasing or decreasing China’s inclination to join an international agreement)
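To make the list above a bit more concrete, here is a purely illustrative sketch of how a few of these cruxes might be combined into a rough score for a particular regulation. Every parameter name, every number, and the multiplicative structure below are my own assumptions for illustration, not anything this post commits to.

```python
# Illustrative only: a toy back-of-the-envelope model of a few of the cruxes above.
# All parameter names, values, and the way they combine are assumptions made for
# this sketch, not claims from the post.

from dataclasses import dataclass


@dataclass
class RegulationScenario:
    us_slowdown_years: float       # how much the regulation slows US progress toward powerful AI
    extraterritorial_reach: float  # 0 = applies only to US labs, 1 = applies equally to foreign labs
    china_gap_years: float         # how far behind China is, after pricing in export controls etc.
    relative_safety_gain: float    # 0-1: how much it matters (for safety) that US labs lead
    reversal_probability: float    # chance the US reverses the regulation once it starts losing its lead


def differential_slowdown_years(s: RegulationScenario) -> float:
    """US lead consumed by the regulation, net of any slowdown it also imposes abroad."""
    return s.us_slowdown_years * (1.0 - s.extraterritorial_reach)


def rough_badness(s: RegulationScenario) -> float:
    """Crude 0-1 'badness' score: the fraction of the US lead given up, weighted by how
    much the lead matters for safety, discounted by the chance the regulation is reversed."""
    lead_fraction_lost = min(differential_slowdown_years(s) / s.china_gap_years, 1.0)
    return lead_fraction_lost * s.relative_safety_gain * (1.0 - s.reversal_probability)


# Example: a strong training-run regulation with limited extraterritorial reach.
scenario = RegulationScenario(
    us_slowdown_years=1.0,
    extraterritorial_reach=0.2,
    china_gap_years=2.0,
    relative_safety_gain=0.7,
    reversal_probability=0.3,
)
print(f"Rough badness score: {rough_badness(scenario):.2f}")  # 0.20 under these made-up numbers
```

Even a toy model like this makes the dependencies explicit: the cost of a regulation shrinks quickly as its extraterritorial reach grows or as China’s gap lengthens, and it is heavily discounted if the regulation is likely to be reversed anyway.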
Two questions seem particularly important: extraterritoriality and the “effectively move” question. I suspect some people have a good sense of the extent to which AI regulation would be extraterritorialized and what that depends on, and some people have a good sense of the extent to which labs can effectively hop regulatory jurisdictions and what that depends on. If you know, please let me know!
The US government should do speed-orthogonal safety stuff (e.g. facilitating safety features on hardware, clarifying liability, and requiring training run reporting and incident reporting). The US government should slow foreign progress (e.g. restricting the publication of research, restricting the sharing of research artifacts like model weights, imposing export controls, and requiring security standards). My guess is that the US government should avoid slowing leading labs much; things that would change my mind include foreign labs seeming further behind than I currently believe or leading labs seeming less (relatively) safe than I currently believe.
Thanks to two people for discussing some of these ideas with me.
[1] Enforcing some best practices for safety wouldn’t really hurt speed. Some important regulation would.
[2] To the extent that leading labs are already doing what a regulation would require, the regulation doesn’t slow US AI progress, but it doesn’t improve safety much either. (It would have the minor positive effects of requiring less cautious labs to be safer, preventing leading labs from becoming much less safe, and maybe causing future regulation to be more productive.)
[3] My impression: very unlikely.
[4] Or deny talent, but that seems less important.
[5] My impression: a lot.
[6] This seems less important than safety, but my impression is: moderately.