Interviews

Last edit: 26 Nov 2021 14:28 UTC by Multicore


Related Pages: Interview Series On Risks From AI, Dialogue (format)

My Interview With Cade Metz on His Reporting About Slate Star Codex

Zack_M_Davis, 26 Mar 2024 17:18 UTC
188 points
187 comments · 6 min read · LW link

Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky

bayesed, 20 Feb 2023 16:42 UTC
83 points
54 comments · 1 min read · LW link
(www.youtube.com)

AXRP Episode 32 - Understanding Agency with Jan Kulveit

DanielFilan, 30 May 2024 3:50 UTC
20 points
0 comments · 53 min read · LW link

AXRP Episode 27 - AI Control with Buck Shlegeris and Ryan Greenblatt

DanielFilan, 11 Apr 2024 21:30 UTC
69 points
10 comments · 107 min read · LW link

AXRP Episode 28 - Suing Labs for AI Risk with Gabriel Weil

DanielFilan, 17 Apr 2024 21:42 UTC
12 points
0 comments · 65 min read · LW link

AXRP Episode 29 - Science of Deep Learning with Vikrant Varma

DanielFilan, 25 Apr 2024 19:10 UTC
20 points
1 comment · 63 min read · LW link

AXRP Episode 38.3 - Erik Jenner on Learned Look-Ahead

DanielFilan, 12 Dec 2024 5:40 UTC
20 points
0 comments · 16 min read · LW link

AXRP Episode 30 - AI Security with Jeffrey Ladish

DanielFilan, 1 May 2024 2:50 UTC
25 points
0 comments · 79 min read · LW link

AXRP Episode 31 - Singular Learning Theory with Daniel Murfet

DanielFilan, 7 May 2024 3:50 UTC
72 points
4 comments · 71 min read · LW link

AXRP Episode 33 - RLHF Problems with Scott Emmons

DanielFilan, 12 Jun 2024 3:30 UTC
34 points
0 comments · 56 min read · LW link

AXRP Episode 34 - AI Evaluations with Beth Barnes

DanielFilan, 28 Jul 2024 3:30 UTC
23 points
0 comments · 69 min read · LW link

What is it like to be psychologically healthy? Podcast ft. DaystarEld

5 Oct 2024 19:14 UTC
31 points
8 comments · 2 min read · LW link
(chrislakin.blog)

AXRP Episode 37 - Jaime Sevilla on Forecasting AI

DanielFilan, 4 Oct 2024 21:00 UTC
21 points
3 comments · 56 min read · LW link

AXRP Episode 36 - Adam Shai and Paul Riechers on Computational Mechanics

DanielFilan, 29 Sep 2024 5:50 UTC
25 points
0 comments · 55 min read · LW link

“Dangers of AI and the End of Human Civilization” Yudkowsky on Lex Fridman

DragonGod, 30 Mar 2023 15:43 UTC
38 points
32 comments · 1 min read · LW link
(www.youtube.com)

AXRP Episode 35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization

DanielFilan, 24 Aug 2024 22:30 UTC
21 points
0 comments · 74 min read · LW link

AXRP Episode 20 - ‘Reform’ AI Alignment with Scott Aaronson

DanielFilan, 12 Apr 2023 21:30 UTC
22 points
2 comments · 68 min read · LW link

[Interview w/ Quintin Pope] Evolution, values, and AI Safety

fowlertm, 24 Oct 2023 13:53 UTC
11 points
0 comments · 1 min read · LW link

AXRP Episode 26 - AI Governance with Elizabeth Seger

DanielFilan, 26 Nov 2023 23:00 UTC
14 points
0 comments · 66 min read · LW link

Geoffrey Miller on Effective Altruism and Rationality

Jacob Falkovich, 15 Jun 2018 17:05 UTC
19 points
0 comments · 1 min read · LW link
(putanumonit.com)

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Palus Astra, 16 Apr 2020 0:50 UTC
58 points
27 comments · 89 min read · LW link

Interview on IQ, genes, and genetic engineering with expert (Hsu)

James_Miller, 28 May 2017 22:19 UTC
7 points
8 comments · 1 min read · LW link
(www.youtube.com)

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

Palus Astra, 1 Jul 2020 17:30 UTC
35 points
4 comments · 67 min read · LW link

AXRP Episode 21 - Interpretability for Engineers with Stephen Casper

DanielFilan, 2 May 2023 0:50 UTC
12 points
1 comment · 66 min read · LW link

Robin Hanson and I talk about AI risk

KatjaGrace, 4 May 2023 22:20 UTC
39 points
8 comments · 1 min read · LW link
(worldspiritsockpuppet.com)

AXRP Episode 22 - Shard Theory with Quintin Pope

DanielFilan, 15 Jun 2023 19:00 UTC
52 points
11 comments · 93 min read · LW link

deluks917 on Online Weirdos

Jacob Falkovich, 24 Nov 2018 17:03 UTC
24 points
3 comments · 10 min read · LW link

AXRP Episode 23 - Mechanistic Anomaly Detection with Mark Xu

DanielFilan, 27 Jul 2023 1:50 UTC
22 points
0 comments · 72 min read · LW link

AXRP Episode 24 - Superalignment with Jan Leike

DanielFilan, 27 Jul 2023 4:00 UTC
55 points
3 comments · 69 min read · LW link

Ilya Sutskever’s thoughts on AI safety (July 2023): a transcript with my comments

mishka, 10 Aug 2023 19:07 UTC
21 points
3 comments · 5 min read · LW link

A Key Power of the President is to Coordinate the Execution of Existing Concrete Plans

Ben Pace, 16 Jul 2019 5:06 UTC
152 points
13 comments · 10 min read · LW link

AXRP Episode 25 - Cooperative AI with Caspar Oesterheld

DanielFilan, 3 Oct 2023 21:50 UTC
43 points
0 comments · 92 min read · LW link

AXRP Episode 1 - Adversarial Policies with Adam Gleave

DanielFilan, 29 Dec 2020 20:41 UTC
12 points
5 comments · 34 min read · LW link

AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy

DanielFilan, 10 Mar 2021 4:30 UTC
35 points
12 comments · 36 min read · LW link

AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch

DanielFilan, 29 Dec 2020 20:45 UTC
27 points
0 comments · 28 min read · LW link

AXRP Episode 2 - Learning Human Biases with Rohin Shah

DanielFilan, 29 Dec 2020 20:43 UTC
13 points
0 comments · 35 min read · LW link

AXRP Episode 8 - Assistance Games with Dylan Hadfield-Menell

DanielFilan, 8 Jun 2021 23:20 UTC
22 points
1 comment · 72 min read · LW link

AXRP Episode 7 - Side Effects with Victoria Krakovna

DanielFilan, 14 May 2021 3:50 UTC
34 points
6 comments · 43 min read · LW link

AXRP Episode 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra

DanielFilan, 28 May 2021 0:20 UTC
24 points
1 comment · 67 min read · LW link

AXRP Episode 6 - Debate and Imitative Generalization with Beth Barnes

DanielFilan, 8 Apr 2021 21:20 UTC
26 points
3 comments · 60 min read · LW link

AXRP Episode 9 - Finite Factored Sets with Scott Garrabrant

DanielFilan, 24 Jun 2021 22:10 UTC
59 points
2 comments · 59 min read · LW link

I wanted to interview Eliezer Yudkowsky but he’s busy so I simulated him instead

lsusr, 16 Sep 2021 7:34 UTC
111 points
33 comments · 5 min read · LW link

AXRP Episode 10 - AI’s Future and Impacts with Katja Grace

DanielFilan, 23 Jul 2021 22:10 UTC
34 points
2 comments · 77 min read · LW link

AXRP Episode 11 - Attainable Utility and Power with Alex Turner

DanielFilan, 25 Sep 2021 21:10 UTC
19 points
5 comments · 53 min read · LW link

AXRP Episode 12 - AI Existential Risk with Paul Christiano

DanielFilan, 2 Dec 2021 2:20 UTC
38 points
0 comments · 126 min read · LW link

AXRP Episode 13 - First Principles of AGI Safety with Richard Ngo

DanielFilan, 31 Mar 2022 5:20 UTC
24 points
1 comment · 48 min read · LW link

AXRP Episode 14 - Infra-Bayesian Physicalism with Vanessa Kosoy

DanielFilan, 5 Apr 2022 23:10 UTC
25 points
10 comments · 52 min read · LW link

Duncan Sabien On Writing

lynettebye, 7 Apr 2022 16:09 UTC
36 points
3 comments · 16 min read · LW link

AXRP Episode 15 - Natural Abstractions with John Wentworth

DanielFilan, 23 May 2022 5:40 UTC
34 points
1 comment · 58 min read · LW link

AXRP Episode 16 - Preparing for Debate AI with Geoffrey Irving

DanielFilan, 1 Jul 2022 22:20 UTC
20 points
0 comments · 37 min read · LW link

AXRP Episode 17 - Training for Very High Reliability with Daniel Ziegler

DanielFilan, 21 Aug 2022 23:50 UTC
16 points
0 comments · 35 min read · LW link

AXRP Episode 18 - Concept Extrapolation with Stuart Armstrong

DanielFilan, 3 Sep 2022 23:12 UTC
12 points
1 comment · 39 min read · LW link

Me (Steve Byrnes) on the “Brain Inspired” podcast

Steven Byrnes, 30 Oct 2022 19:15 UTC
26 points
1 comment · 1 min read · LW link
(braininspired.co)

Podcast with Divia Eden on operant conditioning

DanielFilan, 15 Jan 2023 2:44 UTC
14 points
0 comments · 1 min read · LW link
(youtu.be)

AXRP Episode 19 - Mechanistic Interpretability with Neel Nanda

DanielFilan, 4 Feb 2023 3:00 UTC
45 points
0 comments · 117 min read · LW link

Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure

DanielFilan, 5 Feb 2023 2:52 UTC
88 points
20 comments · 1 min read · LW link
(thefilancabinet.com)

AXRP Episode 38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems

DanielFilan, 14 Nov 2024 7:00 UTC
14 points
0 comments · 12 min read · LW link

Gwern Branwen interview on Dwarkesh Patel’s podcast: “How an Anonymous Researcher Predicted AI’s Trajectory”

Said Achmiz, 14 Nov 2024 23:53 UTC
80 points
0 comments · 1 min read · LW link
(www.dwarkeshpatel.com)

AXRP Episode 38.1 - Alan Chan on Agent Infrastructure

DanielFilan, 16 Nov 2024 23:30 UTC
12 points
0 comments · 14 min read · LW link

AXRP Episode 38.2 - Jesse Hoogland on Singular Learning Theory

DanielFilan, 27 Nov 2024 6:30 UTC
34 points
0 comments · 10 min read · LW link

AXRP Episode 39 - Evan Hubinger on Model Organisms of Misalignment

DanielFilan, 1 Dec 2024 6:00 UTC
41 points
0 comments · 67 min read · LW link

EAI Alignment Speaker Series #1: Challenges for Safe & Beneficial Brain-Like Artificial General Intelligence with Steve Byrnes

23 Mar 2023 14:32 UTC
28 points
0 comments · 27 min read · LW link
(youtu.be)

Robin Hanson on the futurist focus on AI

abergal, 13 Nov 2019 21:50 UTC
31 points
24 comments · 1 min read · LW link
(aiimpacts.org)

Interview with Bill O’Rourke—Russian Corruption, Putin, Applied Ethics, and More

JohnGreer, 27 Oct 2024 17:11 UTC
3 points
0 comments · 6 min read · LW link

Podcast: Eye4AI on 2023 Survey

KatjaGrace, 16 May 2024 7:40 UTC
8 points
0 comments · 1 min read · LW link
(worldspiritsockpuppet.com)

BHTV: Jaron Lanier and Yudkowsky

Eliezer Yudkowsky, 1 Nov 2008 17:04 UTC
8 points
66 comments · 1 min read · LW link

BHTV: de Grey and Yudkowsky

Eliezer Yudkowsky, 13 Dec 2008 15:28 UTC
10 points
12 comments · 1 min read · LW link

[Question] Value of Querying 100+ People About Humanity’s Future

Fer32dwt34r3dfsz, 8 Nov 2022 0:41 UTC
9 points
3 comments · 2 min read · LW link

Interview with Putanumonit

Jacob Falkovich, 24 Apr 2019 14:53 UTC
15 points
1 comment · 1 min read · LW link

[Link] My Interview with Dilbert creator Scott Adams

James_Miller, 13 Sep 2016 5:22 UTC
17 points
27 comments · 1 min read · LW link

BHTV: Yudkowsky / Wilkinson

Eliezer Yudkowsky, 26 Jan 2009 1:10 UTC
4 points
19 comments · 1 min read · LW link

BHTV: Yudkowsky / Robert Greene

Eliezer Yudkowsky, 16 Nov 2009 20:26 UTC
16 points
24 comments · 1 min read · LW link

Ethan Perez on the Inverse Scaling Prize, Language Feedback and Red Teaming

Michaël Trazzi, 24 Aug 2022 16:35 UTC
26 points
0 comments · 3 min read · LW link
(theinsideview.ai)

Link: Interview with Vladimir Vapnik

Daniel_Burfoot, 25 Jul 2009 13:36 UTC
22 points
7 comments · 2 min read · LW link

AXRP Episode 4 - Risks from Learned Optimization with Evan Hubinger

DanielFilan, 18 Feb 2021 0:03 UTC
43 points
10 comments · 87 min read · LW link

Quotes from the WWMoR Podcast Episode with Eliezer

MondSemmel, 13 Mar 2021 21:43 UTC
95 points
3 comments · 4 min read · LW link

Interview with Olle Häggström: Reason, COVID-19 and Academic Freedom in Sweden

Erich_Grunewald, 21 Aug 2021 15:08 UTC
8 points
0 comments · 2 min read · LW link
(www.erichgrunewald.com)

Conversation with Paul Christiano

abergal, 11 Sep 2019 23:20 UTC
44 points
6 comments · 30 min read · LW link
(aiimpacts.org)

AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre

Palus Astra, 16 Mar 2020 23:00 UTC
12 points
0 comments · 48 min read · LW link

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Palus Astra, 1 Apr 2020 1:02 UTC
7 points
1 comment · 46 min read · LW link

FLI Podcast: On Superforecasting with Robert de Neufville

Palus Astra, 30 Apr 2020 23:08 UTC
6 points
0 comments · 52 min read · LW link

Transcription of Eliezer’s January 2010 video Q&A

curiousepic, 14 Nov 2011 17:02 UTC
112 points
9 comments · 56 min read · LW link

[Transcript] Richard Feynman on Why Questions

Grognor, 8 Jan 2012 19:01 UTC
116 points
45 comments · 5 min read · LW link

Interview with Matt Freeman

Evenflair, 21 Nov 2022 18:17 UTC
15 points
0 comments · 1 min read · LW link
(overcast.fm)

Rohin Shah on reasons for AI optimism

abergal, 31 Oct 2019 12:10 UTC
40 points
58 comments · 1 min read · LW link
(aiimpacts.org)

Notes from a conversation with Ing. Agr. Adriana Balzarini

Pablo Repetto, 8 May 2022 15:56 UTC
5 points
0 comments · 2 min read · LW link
(pabloernesto.github.io)

Situating LessWrong in contemporary philosophy: An interview with Jon Livengood

Suspended Reason, 1 Jul 2020 0:37 UTC
117 points
21 comments · 19 min read · LW link

Q&A with Jürgen Schmidhuber on risks from AI

XiXiDu, 15 Jun 2011 15:51 UTC
61 points
45 comments · 4 min read · LW link

Bloggingheads: Yudkowsky and Horgan

Eliezer Yudkowsky, 7 Jun 2008 22:09 UTC
7 points
37 comments · 1 min read · LW link

Q&A with experts on risks from AI #1

XiXiDu, 8 Jan 2012 11:46 UTC
45 points
67 comments · 9 min read · LW link

Q&A with Stan Franklin on risks from AI

XiXiDu, 11 Jun 2011 15:22 UTC
36 points
10 comments · 2 min read · LW link

Did you enjoy Ramez Naam’s “Nexus” trilogy? Check out this interview on neurotech and the law.

fowlertm, 11 Oct 2022 11:10 UTC
5 points
0 comments · 1 min read · LW link

Aella on Rationality and the Void

Jacob Falkovich, 31 Oct 2019 21:40 UTC
27 points
8 comments · 15 min read · LW link

Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism

Erich_Grunewald, 8 May 2021 11:44 UTC
11 points
2 comments · 1 min read · LW link
(www.erichgrunewald.com)

GiveWell interview with major SIAI donor Jaan Tallinn

jsalvatier, 19 Jul 2011 15:10 UTC
25 points
8 comments · 1 min read · LW link

My hour-long interview with Yudkowsky on “Becoming a Rationalist”

lukeprog, 6 Feb 2011 3:19 UTC
33 points
22 comments · 1 min read · LW link

Muehlhauser-Wang Dialogue

lukeprog, 22 Apr 2012 22:40 UTC
34 points
288 comments · 12 min read · LW link

Q&A with Abram Demski on risks from AI

XiXiDu, 17 Jan 2012 9:43 UTC
33 points
71 comments · 9 min read · LW link

Q&A with experts on risks from AI #2

XiXiDu, 9 Jan 2012 19:40 UTC
22 points
29 comments · 7 min read · LW link

See Eliezer talk with PZ Myers and David Brin (and me) about immortality this Sunday

Eneasz, 17 Jul 2013 15:56 UTC
26 points
5 comments · 1 min read · LW link