This post is quite strange and at odds with your first one. Your own point 5 contradicts your point 6. If they’re so good at taking ideas seriously, why wouldn’t they respond to coherent reasoning presented by a US president? Points 7 and 8 just read like hysterical Orientalist Twitter China Watcher nonsense, to be quite frank. There is absolutely nothing substantiating that China would recklessly pursue nothing but “superiority” in AI at all costs (up to and including national suicide) beyond simplistic narratives of the CCP being a cartoon evil force seeking world domination and such.
Instead of invoking tired tropes like the Century of Humiliation, I would mention the tech/economic restrictions recently levied by the US (which are, not inaccurately, broadly seen in China as an attempt to suppress its national development, with comments by Xi to that effect). Any negotiated slowdown in AI would have to be demonstrated to China not to be a component of that campaign, which shouldn't be hard to do if the US is also verifiably halting its own progress and the AGI x-risk arguments are clearly communicated.
I really don’t know what Beijing is going to do. Sometimes it makes really weird decisions, like not importing better COVID vaccines before the COVID restrictions were removed. There is no law of physics that says Beijing will take X seriously if Washington presents a good argument, or if Washington believes it hard enough. Beijing can be influenced by rational arguments, but mostly by Chinese experts. Right now, the Chinese space isn’t taking AI safety seriously. There is no Chinese Eliezer Yudkowsky. If the US in 2001 had been approached by Beijing asking for AI disarmament, it would have assumed it was either an attempt at manipulation or a special Chinese brand of crazy. Maybe both.
Put yourself in Beijing’s shoes for a moment. What would you do if you thought Washington seriously believed in AI ruin, but all of your experts told you it was just the West being crazy? I would squeeze out as many concessions (especially Taiwan) as possible as the price of compliance before defecting and pursuing AI research, preferably in secret, safe in the knowledge that the US is unwilling to pursue this promising technology. What the hell are they going to do if they catch you? Start up their own AI research programs they believe are going to destroy the world? Maybe they’ll even send you their best AI safety research anyway, and you can use it to further your capabilities research.
I think (Cooperate, Cooperate) isn’t the obvious result here.
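The game-theoretic intuition here can be sketched as a one-shot prisoner's dilemma. The payoff numbers below are purely illustrative assumptions, not anything from the discussion; they just encode the claim that secretly defecting while the other side verifiably halts is the most attractive outcome for the defector:

```python
# Illustrative AI-race payoff matrix (hypothetical numbers).
# Each side chooses C (cooperate: halt AI) or D (defect: pursue AI).
# payoffs[(us_move, china_move)] = (us_payoff, china_payoff)
payoffs = {
    ("C", "C"): (3, 3),   # mutual verified halt
    ("C", "D"): (0, 5),   # one side secretly defects and gains an edge
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # open race, both bear the risk
}

def best_response(opponent_move, player):
    """Return the move maximizing this player's payoff, given the opponent's move."""
    idx = 0 if player == "us" else 1
    moves = ["C", "D"]
    if player == "us":
        return max(moves, key=lambda m: payoffs[(m, opponent_move)][idx])
    return max(moves, key=lambda m: payoffs[(opponent_move, m)][idx])

# If either side expects the other to cooperate, defection pays more under
# these payoffs, so (C, C) is not a Nash equilibrium of the one-shot game.
print(best_response("C", "us"))     # D
print(best_response("C", "china"))  # D
```

Under these assumed payoffs, defection strictly dominates, which is the standard reason (Cooperate, Cooperate) only emerges with verification, repeated interaction, or changed payoffs.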
Why wouldn’t their leadership be capable of personally evaluating arguments that this community has repeatedly demonstrated can be compressed into sub-10-minute nontechnical talks? And why assume whichever experts they’re taking advice from would uniformly interpret it as “craziness”, especially when surveys show most AI researchers in the West are now taking existential risk seriously? It’s really not such a difficult or unintuitive concept to grasp that building a more intelligent species could go badly.
My take is that the lack of AI safety activity in China is due almost entirely to the language barrier; I don’t see much reason they wouldn’t be about as receptive to the fundamental arguments as a Western audience, once presented with them competently.
Honestly, I would probably be more concerned about convincing Western leaders, whose being on board this debate seems to take as an axiom.
Points 7 and 8 just read like hysterical Orientalist Twitter China Watcher nonsense, to be quite frank. There is absolutely nothing substantiating that China would recklessly pursue nothing but “superiority” in AI at all costs (up to and including national suicide) beyond simplistic narratives of the CCP being a cartoon evil force seeking world domination and such.
I have experience living in a strongly anti-West country ruled by the same guy for 10+ years (Putin’s Russia). The list of similarities to Xi’s China includes a Shameful Period of Humiliation often employed by the state media to reinforce the anti-West narrative (in Russia’s case, it’s the 1990s).
With this background, I see points 7 and 8 as valid, and likely applicable to most anti-West governments of the same nature.
7. Our AI policy isn’t OK with second place in the long run. Any AI-restriction treaty that China will accept requires not just Chinese parity, but Chinese superiority in AI.
Yep, same for Russia. Even if the Russian government decides to give the impression of accepting such a treaty, or even if it starts enforcing the treaty among civilian companies, the Russian military will continue to secretly work on military AI anyway. As Putin himself said, “The country that secures a monopoly in the field of artificial intelligence will become the ruler of the world”.
Another of his famous sayings: “there is no value in a world where Russia doesn’t exist” (the context: a discussion about Russia destroying the world with nukes if the West attempts to subjugate Russia).
8. Then again, Beijing is hard to predict. It may agree to an AI disarmament treaty in 6 months, or it might confiscate private GPUs in an effort at mass mobilization, spending billions to build the next LLM. It might do both.
Again, same for Russia. Putin has a reputation for accepting any vaguely reasonable expert proposal, and even several contradictory proposals on the same topic, as long as the proposers are strongly loyal to him.
This sometimes results in the wildest shit becoming law. For example, Russia banned exports of biological tissue samples because someone told Putin that they could be used to develop a virus that exclusively kills Russians (which is biological nonsense).
In general, Russia is way behind the US or China in the field of AI. But several major companies (Yandex, Sber) have demonstrated the ability to adapt and deploy some relatively recent open-source AI tech at scale.
Even with the severe hardware sanctions in place, there may be a Russian GPT-4 in 5 years or less.
The sub-10-minute arguments aren’t convincing. No sane politician would distrust their experts over online hysteria.