To test this claim, we could look to China, where AI x-risk concerns are less popular and influential. China passed a regulation on deepfakes in January 2022 and one on recommendation algorithms in March 2022. This year, they passed a regulation on generative AI which requires evaluation of training data and red teaming of model outputs. Perhaps this final measure was the result of listening to ARC and other AI safety folks who popularized model evaluations, but more likely, red teaming and evaluations are simply the common-sense way for a government to prevent AI misbehavior.
The European Union’s AI Act was likewise first proposed before any widespread recognition of AI x-risks.
On the other hand, I agree that key provisions in Biden’s executive order appear acutely influenced by AI x-risk concerns. I think it’s likely that without influence from people concerned about x-risk, the administration’s actions would more closely resemble the Blueprint for an AI Bill of Rights.
The lesson I draw is that there is plenty of appetite for AI regulation independent of x-risk concerns. But it’s important to make sure that regulation is effective, rather than blunt and untargeted.
Link to China’s red teaming standard — note that their definitions of misbehavior are quite different from yours, and they do not focus on catastrophic risks: https://twitter.com/mattsheehan88/status/1714001598383317459?s=46
Yes, this is the lesson I draw too, and it’s precisely what I argue for in the post.
Full credit to you for seeing this ahead of time; I’ve been surprised by the appetite for regulation.
“CESI’s Artificial Intelligence Standardization White Paper released in 2018 states that “AI systems that have a direct impact on the safety of humanity and the safety of life, and may constitute threats to humans” must be regulated and assessed, suggesting a broad threat perception (Section 4.5.7). In addition, a TC260 white paper released in 2019 on AI safety/security worries that “emergence” (涌现性) by AI algorithms can exacerbate the black box effect and “autonomy” can lead to algorithmic “self-improvement” (Section 3.2.1.3).”
From https://concordia-consulting.com/wp-content/uploads/2023/10/State-of-AI-Safety-in-China.pdf