My understanding of your claim is something like:
Claim 1: Cooperation with China would likely require a strong Chinese AI safety community
Claim 2: The Chinese AI safety community is weak
Conclusion: Therefore, cooperation with China is infeasible
I don’t have strong takes on claim 2, but I think I (at least at first glance) disagree with claim 1. It seems quite plausible to imagine international cooperation without requiring strong domestic AI safety communities in each country that opts in to the agreement. If the US tried sufficiently hard, and was willing to make trades/sacrifices, it seems plausible to me that it could get buy-in from other countries even if there weren’t strong domestic AIS communities.
Also, traditionally when people talk about the Chinese AI Safety community, they often talk about people who are in some way affiliated with or motivated by EA/LW ideas. There are 2-3 groups that always get cited.
I think this view is pretty limited. I expect that, especially as AI risk continues to get more mainstream, we’re going to see a lot of people care about AI safety from different vantage points. In other words, there’s still time for new AI safety movements to form in China (and elsewhere), even if they don’t involve the 2-3 “vetted” AI safety groups calling the shots.
Finally, there are ultimately a pretty small number of major decision-makers. If the US “led the way” on AI safety conversations, it may be possible to get buy-in from that small number of decision-makers.
To be clear, I’m not wildly optimistic about unprecedented global cooperation. (There’s a reason “unprecedented” is in the phrase!) But I do think there are some paths to success that seem plausible even if the current Chinese AI safety community is not particularly strong. (And note I don’t claim to have informed views about how strong it is.)
My claim is that AI safety isn’t part of the Chinese gestalt. It’s like America asking China to support Israel because building the Third Temple will bring the Final Judgement. Chinese leadership don’t have AI safety as a real concern. Chinese researchers who help advise Chinese leadership don’t have AI safety as a real concern. At most they consider it like the new land acknowledgments—another box they have to check off in order to interface with Western academia. Just busy work that they privately consider utterly deranged.
Stuart Russell claims that Xi Jinping has referred to the existential threat of AI to humanity [1].
[1] 5:52 of Russell’s interview on Smerconish: https://www.cnn.com/videos/tech/2023/04/01/smr-experts-demand-pause-on-ai.cnn
I think in the universe where the President is personally asking Xi to consider this seriously, Xi has a good chance of considering it seriously. I do not expect to live in that world, but it’s not that much more unlikely than the one where America and the rest of the West rise to the occasion at all.
Imagine you are President Obama. The King of Saudi Arabia gives you a phone call and tries to convince you to ban all US tech companies because they might inadvertently kill God. That is the place we are at in terms of Chinese AI safety awareness in the leadership.
I reject the analogy. America is already winning the tech race, so we don’t have to ask China to give up a lead, and Westerners do not worry about existential risk because of some enormous cultural gap whereby we care about our lives but Chinese people don’t. This is a much easier bridge to cross than you are making it out to be.