“CESI’s Artificial Intelligence Standardization White Paper released in 2018 states that “AI systems that have a direct impact on the safety of humanity and the safety of life, and may constitute threats to humans” must be regulated and assessed, suggesting a broad threat perception (Section 4.5.7).42 In addition, a TC260 white paper released in 2019 on AI safety/security worries that “emergence” (涌现性) by AI algorithms can exacerbate the black box effect and “autonomy” can lead to algorithmic “self-improvement” (Section 3.2.1.3).43” From https://concordia-consulting.com/wp-content/uploads/2023/10/State-of-AI-Safety-in-China.pdf