This seems to be arguing against a starry-eyed idealist case for an “AI disarmament treaty”, but not really against a cynical/realistic case. (At first I was going to say “arguing against a strawman”, but no, there are in fact lots of starry-eyed idealists in alignment.)
Here’s my cynical/realistic case for an “AI disarmament treaty” (or something vaguely in that cluster) with China. As the post notes, the regulations mostly provide evidence that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation. For purposes of an AI treaty, that’s plausibly all we need. Near-term AI is a potential threat to stability from the CCP’s perspective. That’s true whether the AI is built in China or somewhere else; American-built AI is still a threat to stability. So presumably the CCP would love for the US to adopt rules limiting new LLMs. If the US comes along and says “we’ll block training of big new AIs if you do”, the CCP’s answer is plausibly “yes definitely that sounds excellent”.
And sure, China won’t be working much on AI safety. That’s fine; the point of an “AI disarmament treaty” is not to get literally everyone working on safety. The other side doesn’t even need to be motivated by X-risk. If they’re willing to commit to not building big new AIs, then it doesn’t really matter whether they’re doing so for the same reasons we want it.
Parity in AI isn’t what China is after; China doesn’t want to preserve the status quo. We want to win. We want AI hegemony. We want to be years ahead of the US in terms of our AIs. And frankly, we’re not that far behind: the recent Baidu LLMs perform somewhere between GPT-2 and GPT-3. To tie is to lose, and stopping the race now is the same as losing.
I also don’t see how LLMs could destabilize China in the near term. Spam and propaganda aren’t a big issue, since you need to submit your real-life ID in order to post on Chinese sites.
If that’s true, how do you explain the proposed guidelines that make it harder to train big models in China? They suggest that the Cyberspace Administration of China believes there are reasons to slow down LLM progress domestically.