Thanks! I’ve included Erik Hoel’s and lc’s essays.
Your article doesn’t actually call for AI slowdown/pause/restraint, as far as I can tell, and explicitly guards against that interpretation:
This analysis does not show that restraint for AGI is currently desirable; that it would be easy; that it would be a wise strategy (given its consequences); or that it is an optimal or competitive approach relative to other available AI governance strategies.
But if you’ve written anything which explicitly endorses AI restraint then I’ll include that in the list.
Nice, thanks for collating these!
Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological
and somewhat older:
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.