List of requests for an AI slowdown/halt.
Last updated: April 14th 2023.
Pause Giant AI Experiments: An Open Letter
by Future of Life Institute

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down
by Eliezer Yudkowsky

The Case for Halting AI Development
by Max Tegmark, Lex Fridman

Lennart Heim on Compute Governance
by Lennart Heim, Future of Life Institute

Instead of technical research, more people should focus on buying time
by Akash, Olivia Jimenez, Thomas Larsen

Slowing down AI progress is an underexplored alignment strategy
by Michael Huang

Slowing Down AI: Rationales, Proposals, and Difficulties [1]
by Simeon Campos, Henry Papadatos, Charles M

What an actually pessimistic containment strategy looks like
by lc

In the Matter of OpenAI (FTC 2023) [2]
by Center for AI and Digital Policy

Dangers of AI and the End of Human Civilization
by Eliezer Yudkowsky, Lex Fridman

We’re All Gonna Die with Eliezer Yudkowsky
by Eliezer Yudkowsky, Bankless

The public supports regulating AI for safety
by Zach Stein-Perlman

New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development
by Akash
About this document
There has been a recent flurry of letters, articles, statements, and videos endorsing a slowdown or halt of giant AI experiments via (e.g.) regulation or coordination. This document aspires to collect all such examples into a single list. I’m undecided on how best to order and subdivide the examples, but I’m open to suggestions. Note that I’m also including surveys.
This list is:
Living — I’ll try to update the list over time.
Non-exhaustive — There are almost certainly examples I’ve missed.
Non-representative — The list is biased, at least initially, towards things that I have been shown personally.
Please mention in the comments any examples I’ve missed so I can add them.
1. ^ Credit to Zach Stein-Perlman.
2. ^ Credit to MM Maas.
Nice.
https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology
https://www.caidp.org/cases/openai/
https://navigatingairisks.substack.com/p/slowing-down-ai-rationales-proposals
Thanks, Zach!
Nitpick: I believe you meant to say last updated Apr 14, not Mar 14.
Well-spotted 😳
Nice, thanks for collating these!
Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological
and somewhat older:
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.
Thanks! I’ve included Erik Hoel’s and lc’s essays.
Your article doesn’t actually call for an AI slowdown/pause/restraint, as far as I can tell, and explicitly guards against that interpretation. But if you’ve written anything that explicitly endorses AI restraint, then I’ll include it in the list.