Object-Level AI Risk Skepticism
Tag · Last edit: 11 Mar 2023 16:30 UTC by DavidW

Posts that express object-level reasons to be skeptical of core AI X-Risk arguments.
My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”
Quintin Pope · 21 Mar 2023 0:06 UTC · 357 points · 230 comments · 39 min read · LW link

Counterarguments to the basic AI x-risk case
KatjaGrace · 14 Oct 2022 13:00 UTC · 370 points · 124 comments · 34 min read · LW link · 1 review · (aiimpacts.org)

Deceptive Alignment is <1% Likely by Default
DavidW · 21 Feb 2023 15:09 UTC · 90 points · 29 comments · 14 min read · LW link

Contra Yudkowsky on AI Doom
jacob_cannell · 24 Apr 2023 0:20 UTC · 88 points · 111 comments · 9 min read · LW link

Counting arguments provide no evidence for AI doom
Nora Belrose and Quintin Pope · 27 Feb 2024 23:03 UTC · 95 points · 188 comments · 14 min read · LW link

Many arguments for AI x-risk are wrong
TurnTrout · 5 Mar 2024 2:31 UTC · 167 points · 86 comments · 12 min read · LW link

Evolution is a bad analogy for AGI: inner alignment
Quintin Pope · 13 Aug 2022 22:15 UTC · 79 points · 15 comments · 8 min read · LW link

Evolution provides no evidence for the sharp left turn
Quintin Pope · 11 Apr 2023 18:43 UTC · 206 points · 62 comments · 15 min read · LW link

Two Tales of AI Takeover: My Doubts
Violet Hour · 5 Mar 2024 15:51 UTC · 30 points · 8 comments · 29 min read · LW link

The bullseye framework: My case against AI doom
titotal · 30 May 2023 11:52 UTC · 89 points · 35 comments · 1 min read · LW link

Order Matters for Deceptive Alignment
DavidW · 15 Feb 2023 19:56 UTC · 57 points · 19 comments · 7 min read · LW link

Arguments for optimism on AI Alignment (I don’t endorse this version, will reupload a new version soon.)
Noosphere89 · 15 Oct 2023 14:51 UTC · 26 points · 127 comments · 25 min read · LW link

Language Agents Reduce the Risk of Existential Catastrophe
cdkg and Simon Goldstein · 28 May 2023 19:10 UTC · 39 points · 14 comments · 26 min read · LW link

[Question] What Do AI Safety Pitches Not Get About Your Field?
Aris · 22 Sep 2022 21:27 UTC · 28 points · 3 comments · 1 min read · LW link

A potentially high impact differential technological development area
Noosphere89 · 8 Jun 2023 14:33 UTC · 5 points · 2 comments · 2 min read · LW link

Linkpost: ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting
DavidW · 13 Mar 2023 16:52 UTC · 6 points · 0 comments · 1 min read · LW link · (forum.effectivealtruism.org)

Why I am not an AI extinction cautionista
Shmi · 18 Jun 2023 21:28 UTC · 22 points · 40 comments · 2 min read · LW link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)
titotal · 17 May 2023 11:58 UTC · 5 points · 3 comments · 1 min read · LW link

BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?
Peter Berggren · 6 Jul 2023 17:32 UTC · 17 points · 6 comments · 2 min read · LW link

Linkpost: A tale of 2.5 orthogonality theses
DavidW · 13 Mar 2023 14:19 UTC · 9 points · 3 comments · 1 min read · LW link · (forum.effectivealtruism.org)

Deconstructing Bostrom’s Classic Argument for AI Doom
Nora Belrose · 11 Mar 2024 5:58 UTC · 16 points · 14 comments · 1 min read · LW link · (www.youtube.com)

Blake Richards on Why he is Skeptical of Existential Risk from AI
Michaël Trazzi · 14 Jun 2022 19:09 UTC · 41 points · 12 comments · 4 min read · LW link · (theinsideview.ai)

[Link] Sarah Constantin: “Why I am Not An AI Doomer”
lbThingrb · 12 Apr 2023 1:52 UTC · 61 points · 13 comments · 1 min read · LW link · (sarahconstantin.substack.com)

Notes on “the hot mess theory of AI misalignment”
JakubK · 21 Apr 2023 10:07 UTC · 13 points · 0 comments · 5 min read · LW link · (sohl-dickstein.github.io)

Linkpost: A Contra AI FOOM Reading List
DavidW · 13 Mar 2023 14:45 UTC · 25 points · 4 comments · 1 min read · LW link · (magnusvinding.com)