Limiting China’s computing power via export controls on hardware like GPUs might be accelerating global progress in AI capabilities.
When Chinese labs are compute-starved, their research will differentially focus on efficiency gains compared to counterfactual worlds where they are less constrained. So far they have been publishing their research, and their tricks can quickly be incorporated by anyone else. US players can leverage their compute advantage, focusing on experiments and scaling while effectively delegating the research topics that China is motivated to handle.
Google and OpenAI benefit far more from DeepSeek than they do from Meta.
Compute is definitely important for experiments, and the limits undoubtedly slow China's progress; what's harder to determine is whether global progress is slower. In the toy scenario where China's research focus is exactly parallel to and duplicative of Western efforts, it contributes nothing to global progress unless China is faster. More realistically, research space is high-dimensional, and you are likely correct that the decreased magnitude of their research vector outweighs any extra orthogonality benefits, but I don't know how to put numbers on that tradeoff.
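The magnitude-versus-orthogonality tradeoff can be made concrete with a toy model: if China's contribution to global progress is the component of its research vector orthogonal to Western efforts, that contribution is magnitude times the sine of the angle between the two vectors. A minimal sketch, with entirely hypothetical numbers chosen only to illustrate that the tradeoff can go either way:

```python
import math

def orthogonal_contribution(magnitude, angle_deg):
    """Toy model: contribution to global progress is the component of a
    lab's research vector orthogonal to Western efforts, m * sin(theta)."""
    return magnitude * math.sin(math.radians(angle_deg))

# Hypothetical scenario A: unconstrained compute, mostly duplicative research
# (large magnitude, small angle to Western efforts).
unconstrained = orthogonal_contribution(1.0, 20)

# Hypothetical scenario B: compute-starved, efficiency-focused research
# (half the magnitude, but a more orthogonal direction).
constrained = orthogonal_contribution(0.5, 60)

print(unconstrained, constrained)
```

With these particular made-up numbers the constrained lab contributes slightly more (0.5·sin 60° ≈ 0.43 versus 1.0·sin 20° ≈ 0.34), but small changes to either parameter flip the comparison, which is exactly why it is hard to put numbers on the tradeoff.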