Given the state of the Chinese AI Safety community, there is basically zero chance of Chinese buy-in.
Last I checked, it was a bunch of expats doing community building and translating AI Safety videos and literally one guy with a startup (he was paying for it out of pocket). The leadership listens to their experts, and their experts barely even bother to pay lip service to AI safety rhetoric.
From the Chinese perspective, all American actions on the AI front have been heavily hostile thus far. Any GPU reduction treaties will be seen as an effort to cement US AI hegemony.
My understanding of your claim is something like:
Claim 1: Cooperation with China would likely require a strong Chinese AI safety community
Claim 2: The Chinese AI safety community is weak
Conclusion: Therefore, cooperation with China is infeasible
I don’t have strong takes on claim 2, but I think I (at least at first glance) disagree with claim 1. It seems quite plausible to imagine international cooperation without requiring strong domestic AI safety communities in each country that opts in to the agreement. If the US tried sufficiently hard, and was willing to make trades/sacrifices, it seems plausible to me that it could get buy-in from other countries even if there weren’t strong domestic AIS communities.
Also, traditionally when people talk about the Chinese AI Safety community, they often talk about people who are in some way affiliated with or motivated by EA/LW ideas. There are 2-3 groups that always get cited.
I think this is pretty limited. I expect that, especially as AI risk continues to get more mainstream, we’re going to see a lot of people care about AI safety from different vantage points. In other words, there’s still time to see new AI safety movements form in China (and elsewhere), even if they don’t involve the 2-3 “vetted” AI safety groups calling the shots.
Finally, there are ultimately a pretty small number of major decision-makers. If the US “led the way” on AI safety conversations, it might be possible to get buy-in from that small number of decision-makers.
To be clear, I’m not wildly optimistic about unprecedented global cooperation. (There’s a reason “unprecedented” is in the phrase!) But I do think there are some paths to success that seem plausible even if the current Chinese AI safety community is not particularly strong. (And note I don’t claim to have informed views about how strong it is).
My claim is that AI safety isn’t part of the Chinese gestalt. It’s like America asking China to support Israel because building the Third Temple will bring the Final Judgement. Chinese leadership doesn’t have AI safety as a real concern. Chinese researchers who help advise Chinese leadership don’t have AI safety as a real concern. At most they consider it like the new land acknowledgments: another box they have to check off in order to interface with Western academia, busy work that they privately consider utterly deranged.
Stuart Russell claims that Xi Jinping has referred to the existential threat of AI to humanity [1].
[1] 5:52 of Russell’s interview on Smerconish: https://www.cnn.com/videos/tech/2023/04/01/smr-experts-demand-pause-on-ai.cnn
I think in the universe where the President is personally asking Xi to consider this seriously, Xi has a good chance of taking it seriously. I do not expect to live in that world, but it’s not that much more unlikely than the one where America and the rest of the West rise to the occasion at all.
Imagine you are President Obama. The King of Saudi Arabia gives you a phone call and tries to convince you to ban all US tech companies because they might inadvertently kill God. That is the place we are at in terms of Chinese AI safety awareness in the leadership.
I reject the analogy. America is already winning the tech race, so we don’t have to ask China to give up a lead, and it’s not as if Westerners worry about existential risk because of some enormous cultural gap whereby we care about our lives and Chinese people don’t. This is a much easier bridge to cross than you are making it out to be.
Perhaps China’s own AI experts have an independent counterpart to the “AI safety” or “AI alignment” discourse? The idea of AIs taking over or doing their own thing is certainly present in Chinese pop culture, e.g. MOSS in The Wandering Earth.
I’ve tried to find this for months, but all I’ve found are expats that are part of the Western AI safety sphere and a few Chinese discussing Western AI safety ideas on a surface level.
I’d have expected them to be pretty concerned about AI taking over the party. I thought they were generally pretty spooked by things that could threaten the power structure. But I’m just an unusually weird American, so I wouldn’t really know much.