Robin Hanson & Liron Shapira Debate AI X-Risk

Link post

Robin and I just had an interesting 2-hour AI doom debate. We picked up where the 2008 Hanson-Yudkowsky Foom Debate left off, revisiting the key arguments in light of recent AI advances.

[Video thumbnail: "AI Doom Debate," with side-by-side photos of Robin Hanson and Liron Shapira]

My position is similar to Eliezer’s: P(doom) on the order of 50%.

Robin’s position remains shockingly different: P(doom) < 1%.

I think we managed to illuminate some of our cruxes of disagreement, though by no means all. Let us know your thoughts and feedback!

Where To Watch/Listen/Read

For most casually interested readers, I recommend consuming the debate via my debate highlights and analysis post, which has a clean transcript and its own video podcast to go with it.

Topics

  • AI timelines

  • The “outside view” of economic growth trends

  • Future economic doubling times

  • The role of culture in human intelligence

  • Lessons from human evolution and brain size

  • Intelligence increase gradient near human level

  • Bostrom’s Vulnerable World hypothesis

  • The optimization-power view

  • Feasibility of AI alignment

  • Will AI be “above the law” relative to humans

About Doom Debates

My podcast, Doom Debates, hosts high-quality debates between people who don’t see eye-to-eye on the urgent issue of AI extinction risk.

All kinds of guests are welcome, from luminaries to curious randos. If you’re interested in being part of an episode, DM me here or contact me via Twitter or email.

If you’re interested in the content, please subscribe and share it to help grow its reach.