This sequence is on slowing AI from an x-risk perspective.
The posts are in order of priority/quality: the first three are important, the next three are good, and the last three are bad. “How to think about slowing AI” is the best short introduction.
I think my “Foundations” post is by far the best source on the considerations relevant to slowing AI. I hope it informs people’s analysis of possible ways to slow AI (improving interventions) and advances discussion of the relevant considerations (improving foundations).
Slowing AI is not monolithic: we should expect some possible interventions to be bad and some to be great. And “slowing AI” is often a poor conceptual handle for the true goal, which is something like slowing AI, plus focusing on extending crunch time, plus focusing on slowing the riskiest work, plus promoting various side goals, plus lots more nuances.
Thanks to Lukas Gloor, Rose Hadshar, Lionel Levine, and others for comments on drafts. Thanks to Olivia Jimenez, Alex Gray, Katja Grace, Tom Davidson, Alex Lintz, Jeffrey Ladish, Ashwin Acharya, Rick Korzekwa, Siméon Campos, and many others for discussion.
This work in progress is part of a project supported by AI Impacts.