There seems to be a huge jump from: there’s no moat around generative AI (makes sense as how to make one is publicly known, and the secret sauce is just about improving performance) to… all the other stuff which seems completely unrelated?
I acknowledge this. My thinking is a bit scattered, and my posts are often just an attempt to articulate publicly intuitions that I have no other outlet to discuss and refine.
I’m saying, first off, that there is no moat. Yet I observe people on this and similar forums offering the usual refrain: but look, the West is so far ahead in doing X in AI, so we shouldn’t use China as a bogeyman when discussing AI policy. I claim this is bogus. The West isn’t far ahead in X, because everything can be quickly copied, stolen, or brute-forced, and restrictions on hardware and the like appear ineffective. Many of the arguments for disregarding China when setting AI safety policy assume it will remain perpetually a few steps behind. But if they are getting similar performance, then they aren’t behind.
So if there is no moat, and we can expect peer performance soon, then we should be worried: if scaling plus tweaks can reach AGI, then China might conceivably get AGI first, which would be very bad. I have seen replies to this point along the lines of: well, how do you know it would be that much worse? Surely Xi wants human flourishing as well. My response is that governments do terrible things. In the West, at least, the public can see these terrible things and sometimes say: hey, I object, this is bad. The PRC has no such mechanism. So AGI would be dangerous in their hands in a way it might not be, at least initially, in the West. And the PRC is starting from a decidedly not pro-flourishing position (Uighur slavery and genocide, pro-Putinism, invade-Taiwan fever, debt-trap diplomacy, secret police operating abroad, etc.).
If you think AGI kills everyone anyway, then none of this matters. But if you think AGI makes the group possessing it powerful enough to disempower or destroy its competitors, then it REALLY matters, and policies designed to hinder Western AI development could mean Western disempowerment, subjugation, and so on.
I make no guarantees about the coherence of this argument and welcome critiques. Personally, I hope to be wrong.