
AI Risk Skepticism

Last edit: Jan 17, 2025, 10:23 PM by Dakara

AI Risk Skepticism is the view that the potential risks posed by artificial intelligence (AI) are overstated or misunderstood, particularly the direct, tangible dangers arising from the behavior of AI systems themselves. Skeptics of object-level AI risk argue that fears of highly autonomous, superintelligent AI leading to catastrophic outcomes are premature or unlikely.

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · Mar 21, 2023, 12:06 AM
358 points
232 comments · 39 min read · LW link · 1 review

Counterarguments to the basic AI x-risk case

KatjaGrace · Oct 14, 2022, 1:00 PM
370 points
124 comments · 34 min read · LW link · 1 review
(aiimpacts.org)

Deceptive Alignment is <1% Likely by Default

DavidW · Feb 21, 2023, 3:09 PM
89 points
31 comments · 14 min read · LW link · 1 review

Contra Yudkowsky on AI Doom

jacob_cannell · Apr 24, 2023, 12:20 AM
89 points
111 comments · 9 min read · LW link

Counting arguments provide no evidence for AI doom

Feb 27, 2024, 11:03 PM
100 points
188 comments · 14 min read · LW link

Many arguments for AI x-risk are wrong

TurnTrout · Mar 5, 2024, 2:31 AM
158 points
87 comments · 12 min read · LW link

Evolution is a bad analogy for AGI: inner alignment

Quintin Pope · Aug 13, 2022, 10:15 PM
79 points
15 comments · 8 min read · LW link

Arguments for optimism on AI Alignment (I don’t endorse this version, will reupload a new version soon.)

Noosphere89 · Oct 15, 2023, 2:51 PM
28 points
129 comments · 25 min read · LW link

Order Matters for Deceptive Alignment

DavidW · Feb 15, 2023, 7:56 PM
57 points
19 comments · 7 min read · LW link

The Paris AI Anti-Safety Summit

Zvi · Feb 12, 2025, 2:00 PM
129 points
21 comments · 21 min read · LW link
(thezvi.wordpress.com)

Two Tales of AI Takeover: My Doubts

Violet Hour · Mar 5, 2024, 3:51 PM
30 points
8 comments · 29 min read · LW link

The bullseye framework: My case against AI doom

titotal · May 30, 2023, 11:52 AM
89 points
35 comments · 1 min read · LW link

Evolution provides no evidence for the sharp left turn

Quintin Pope · Apr 11, 2023, 6:43 PM
206 points
65 comments · 15 min read · LW link · 1 review

Deceptive Alignment and Homuncularity

Jan 16, 2025, 1:55 PM
25 points
12 comments · 22 min read · LW link

Language Agents Reduce the Risk of Existential Catastrophe

May 28, 2023, 7:10 PM
39 points
14 comments · 26 min read · LW link

A potentially high impact differential technological development area

Noosphere89 · Jun 8, 2023, 2:33 PM
5 points
2 comments · 2 min read · LW link

Why I am not an AI extinction cautionista

Shmi · Jun 18, 2023, 9:28 PM
22 points
40 comments · 2 min read · LW link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)

titotal · May 17, 2023, 11:58 AM
5 points
3 comments · 1 min read · LW link

Linkpost: A tale of 2.5 orthogonality theses

DavidW · Mar 13, 2023, 2:19 PM
9 points
3 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi · Jun 14, 2022, 7:09 PM
41 points
12 comments · 4 min read · LW link
(theinsideview.ai)

Linkpost: A Contra AI FOOM Reading List

DavidW · Mar 13, 2023, 2:45 PM
25 points
4 comments · 1 min read · LW link
(magnusvinding.com)

[Question] What Do AI Safety Pitches Not Get About Your Field?

Aris · Sep 22, 2022, 9:27 PM
28 points
3 comments · 1 min read · LW link

Linkpost: ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

DavidW · Mar 13, 2023, 4:52 PM
6 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?

Peter Berggren · Jul 6, 2023, 5:32 PM
18 points
6 comments · 2 min read · LW link

Gettier Cases [repost]

Antigone · Feb 3, 2025, 6:12 PM
−4 points
4 comments · 2 min read · LW link

[Question] how do the CEOs respond to our concerns?

KvmanThinking · Feb 11, 2025, 11:39 PM
−13 points
7 comments · 1 min read · LW link

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose · Mar 11, 2024, 5:58 AM
16 points
14 comments · 1 min read · LW link
(www.youtube.com)

[Link] Sarah Constantin: “Why I am Not An AI Doomer”

lbThingrb · Apr 12, 2023, 1:52 AM
61 points
13 comments · 1 min read · LW link
(sarahconstantin.substack.com)