Sam Clarke
Karma: 469
Deference on AI timelines: survey results
Sam Clarke and mccaffary · 30 Mar 2023 23:03 UTC · 25 points · 4 comments · 2 min read · LW link
When reporting AI timelines, be clear who you’re deferring to
Sam Clarke · 10 Oct 2022 14:24 UTC · 38 points · 6 comments · 1 min read · LW link
Sam Clarke’s Shortform
Sam Clarke · 7 Oct 2021 14:29 UTC · 4 points · 6 comments · 1 min read · LW link
[Question] Collection of arguments to expect (outer and inner) alignment failure?
Sam Clarke · 28 Sep 2021 16:55 UTC · 21 points · 10 comments · 1 min read · LW link
Distinguishing AI takeover scenarios
Sam Clarke and Sammy Martin · 8 Sep 2021 16:19 UTC · 74 points · 11 comments · 14 min read · LW link
Survey on AI existential risk scenarios
Sam Clarke, apc and Jonas Schuett · 8 Jun 2021 17:12 UTC · 65 points · 11 comments · 7 min read · LW link
[Question] What are the biggest current impacts of AI?
Sam Clarke · 7 Mar 2021 21:44 UTC · 15 points · 5 comments · 1 min read · LW link
Clarifying “What failure looks like”
Sam Clarke · 20 Sep 2020 20:40 UTC · 97 points · 14 comments · 17 min read · LW link