I’ve seen people asking the basic question of how some software could kill people. I want to put together a list of pieces that engage with this question. Here’s my attempt; please add more if you can think of them!
“Why might a superintelligent AI be dangerous?” in the AI Safety FAQ by Stampy
The AI Box Experiment by Eliezer Yudkowsky
That Alien Message by Eliezer Yudkowsky
It Looks Like You’re Trying To Take Over The World by Gwern
Part II of What failure looks like by Paul Christiano
Slow Motion Videos as AI Risk Intuition Pumps by Andrew Critch
AI Could Defeat All Of Us Combined by Holden Karnofsky
(I think the first link is surprisingly good — I was only pointed to it today. I think the rest are all very helpful too.)