I really don’t know what Beijing is going to do. Sometimes it makes really weird decisions, like not importing better COVID vaccines before the COVID restrictions were removed. There is no law of physics that says Beijing will take X seriously if Washington presents a good argument, or if Washington believes it hard enough. Beijing can be influenced by rational arguments, but mostly by Chinese experts. Right now, the Chinese space isn’t taking AI safety seriously. There is no Chinese Eliezer Yudkowsky. If the US in 2001 had been approached by Beijing asking for AI disarmament, it would have assumed it was either an attempt at manipulation or a special Chinese brand of crazy. Maybe both.
Put yourself in Beijing’s shoes for a moment. What would you do if you thought Washington seriously believed in AI ruin, but all of your experts told you it was just the West being crazy? I would squeeze out as many concessions as possible (especially on Taiwan) as the price of compliance before defecting and pursuing AI research, preferably in secret, safe in the knowledge that the US is unwilling to pursue this promising technology. What the hell are they going to do if they catch you? Start up their own AI research programs, which they believe are going to destroy the world? Maybe they’ll even send you their best AI safety research anyway, and you can use it to further your capabilities research.
I think (Cooperate, Cooperate) isn’t the obvious result here.
Why wouldn’t their leadership be capable of personally evaluating arguments that this community has repeatedly demonstrated can be compressed into sub-10-minute nontechnical talks? And why assume that whichever experts they’re taking advice from would uniformly interpret it as “craziness,” especially when surveys show most AI researchers in the West are now taking existential risk seriously? It’s really not such a difficult or unintuitive concept to grasp that building a more intelligent species could go badly.
My take is that the lack of AI safety activity in China is due almost entirely to the language barrier. I don’t see much reason they wouldn’t be about as receptive to the fundamental arguments as a Western audience, once those arguments are presented to them competently.
Honestly, I would probably be more concerned about convincing Western leaders, whose support this debate seems to take as an axiom.
The sub-10-minute arguments aren’t convincing. No sane politician would distrust their experts over online hysteria.