This is very interesting! I did not expect that Chinese respondents would be more optimistic about AI's benefits than worried about its risks, or that they would rank it so low as an existential risk. This contrasts with the posts I see on social media and articles showcasing safety institutes and discussing doomer opinions, which gave me the impression that Chinese academia was generally more concerned about AI risk, and especially existential risk, than the US.
I’m not sure how to reconcile this survey’s results with my previous model. Was I just wrong and updating too much on anecdotal evidence? How representative of policymakers and of influential scientists do you think these results are?
I think the crux is that the views of the CCP and of Chinese citizens don't necessarily correlate strongly; in many ways they can be orthogonal, and sometimes even negatively correlated (as when the government trades off personal freedoms for national security).
I think recent trends suggest the Chinese government / Xi Jinping are taking the risks (especially the tail risks) more seriously, and have done some promising AI safety work. It's still mostly unclear, though. I highly recommend checking out Concordia AI's The State of AI Safety in China Spring 2024 Report.