Youtube link: https://www.youtube.com/watch?v=fZlZQCTqIEo
From the description: “Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkowsky speaks with EconTalk’s Russ Roberts on why we should be very, very afraid and why we’re not prepared or able to manage the terrifying risks of AI.”
Transcript (partial?): https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/#audio-highlights
I thought this was one of Eliezer’s better interviews, at least among the recent ones. But the reaction from the podcast’s usual audience was mixed at best. (See the comments.)
I actually read The Sequences, so I can follow him, but it seems that many of the commenters hadn’t, and those who said they had didn’t buy his specific arguments. Inferential gaps are hard, and hard to bridge in an hour.
We might need something like WIRED’s “X explains Y in 5 levels of difficulty” series, with X = Yudkowsky and Y = notkilleveryoneism, before a general audience “gets it”.