
Object-Level AI Risk Skepticism

Last edit: 11 Mar 2023 16:30 UTC by DavidW

Posts that express object-level reasons to be skeptical of core AI X-Risk arguments.

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope, 21 Mar 2023 0:06 UTC
357 points
230 comments, 39 min read, LW link

Counterarguments to the basic AI x-risk case

KatjaGrace, 14 Oct 2022 13:00 UTC
370 points
124 comments, 34 min read, LW link, 1 review
(aiimpacts.org)

Deceptive Alignment is <1% Likely by Default

DavidW, 21 Feb 2023 15:09 UTC
90 points
29 comments, 14 min read, LW link

Contra Yudkowsky on AI Doom

jacob_cannell, 24 Apr 2023 0:20 UTC
88 points
111 comments, 9 min read, LW link

Counting arguments provide no evidence for AI doom

27 Feb 2024 23:03 UTC
95 points
188 comments, 14 min read, LW link

Many arguments for AI x-risk are wrong

TurnTrout, 5 Mar 2024 2:31 UTC
167 points
86 comments, 12 min read, LW link

Evolution is a bad analogy for AGI: inner alignment

Quintin Pope, 13 Aug 2022 22:15 UTC
79 points
15 comments, 8 min read, LW link

Evolution provides no evidence for the sharp left turn

Quintin Pope, 11 Apr 2023 18:43 UTC
206 points
62 comments, 15 min read, LW link

Two Tales of AI Takeover: My Doubts

Violet Hour, 5 Mar 2024 15:51 UTC
30 points
8 comments, 29 min read, LW link

The bullseye framework: My case against AI doom

titotal, 30 May 2023 11:52 UTC
89 points
35 comments, 1 min read, LW link

Order Matters for Deceptive Alignment

DavidW, 15 Feb 2023 19:56 UTC
57 points
19 comments, 7 min read, LW link

Arguments for optimism on AI Alignment (I don’t endorse this version, will reupload a new version soon.)

Noosphere89, 15 Oct 2023 14:51 UTC
26 points
127 comments, 25 min read, LW link

Language Agents Reduce the Risk of Existential Catastrophe

28 May 2023 19:10 UTC
39 points
14 comments, 26 min read, LW link

[Question] What Do AI Safety Pitches Not Get About Your Field?

Aris, 22 Sep 2022 21:27 UTC
28 points
3 comments, 1 min read, LW link

A potentially high impact differential technological development area

Noosphere89, 8 Jun 2023 14:33 UTC
5 points
2 comments, 2 min read, LW link

Linkpost: ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

DavidW, 13 Mar 2023 16:52 UTC
6 points
0 comments, 1 min read, LW link
(forum.effectivealtruism.org)

Why I am not an AI extinction cautionista

Shmi, 18 Jun 2023 21:28 UTC
22 points
40 comments, 2 min read, LW link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)

titotal, 17 May 2023 11:58 UTC
5 points
3 comments, 1 min read, LW link

BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?

Peter Berggren, 6 Jul 2023 17:32 UTC
17 points
6 comments, 2 min read, LW link

Linkpost: A tale of 2.5 orthogonality theses

DavidW, 13 Mar 2023 14:19 UTC
9 points
3 comments, 1 min read, LW link
(forum.effectivealtruism.org)

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose, 11 Mar 2024 5:58 UTC
16 points
14 comments, 1 min read, LW link
(www.youtube.com)

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi, 14 Jun 2022 19:09 UTC
41 points
12 comments, 4 min read, LW link
(theinsideview.ai)

[Link] Sarah Constantin: “Why I am Not An AI Doomer”

lbThingrb, 12 Apr 2023 1:52 UTC
61 points
13 comments, 1 min read, LW link
(sarahconstantin.substack.com)

Notes on “the hot mess theory of AI misalignment”

JakubK, 21 Apr 2023 10:07 UTC
13 points
0 comments, 5 min read, LW link
(sohl-dickstein.github.io)

Linkpost: A Contra AI FOOM Reading List

DavidW, 13 Mar 2023 14:45 UTC
25 points
4 comments, 1 min read, LW link
(magnusvinding.com)