otto.barten · Karma: 387
Proposing the Conditional AI Safety Treaty (linkpost TIME)
otto.barten · 15 Nov 2024 13:59 UTC · 10 points · 8 comments · 3 min read · LW link · (time.com)
Announcing the AI Safety Summit Talks with Yoshua Bengio
otto.barten · 14 May 2024 12:52 UTC · 9 points · 1 comment · 1 min read · LW link
What Failure Looks Like is not an existential risk (and alignment is not the solution)
otto.barten · 2 Feb 2024 18:59 UTC · 13 points · 12 comments · 9 min read · LW link
Announcing #AISummitTalks featuring Professor Stuart Russell and many others
otto.barten · 24 Oct 2023 10:11 UTC · 17 points · 1 comment · 1 min read · LW link
AI Regulation May Be More Important Than AI Alignment For Existential Safety
otto.barten · 24 Aug 2023 11:41 UTC · 65 points · 39 comments · 5 min read · LW link
[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)
otto.barten · 24 Jul 2023 10:07 UTC · 12 points · 0 comments · 7 min read · LW link · (time.com)
[Crosspost] Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure
otto.barten · 8 May 2023 14:09 UTC · 7 points · 0 comments · 6 min read · LW link · (forum.effectivealtruism.org)
[Crosspost] AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results.
otto.barten · 4 May 2023 14:09 UTC · 5 points · 0 comments · 9 min read · LW link · (forum.effectivealtruism.org)
[Crosspost] Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint
otto.barten · 19 Apr 2023 11:45 UTC · 8 points · 0 comments · 4 min read · LW link · (forum.effectivealtruism.org)
Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public
otto.barten · 9 Mar 2023 10:47 UTC · 14 points · 6 comments · 4 min read · LW link
Why Uncontrollable AI Looks More Likely Than Ever
otto.barten and Roman_Yampolskiy · 8 Mar 2023 15:41 UTC · 18 points · 0 comments · 4 min read · LW link · (time.com)
Please help us communicate AI xrisk. It could save the world.
otto.barten · 4 Jul 2022 21:47 UTC · 4 points · 7 comments · 2 min read · LW link
Should we postpone AGI until we reach safety?
otto.barten · 18 Nov 2020 15:43 UTC · 27 points · 36 comments · 3 min read · LW link
Help wanted: feedback on research proposals for FHI application
otto.barten · 8 Oct 2020 14:42 UTC · 2 points · 3 comments · 1 min read · LW link
otto.barten’s Shortform
otto.barten · 19 Sep 2020 10:34 UTC · 1 point · 26 comments · 1 min read · LW link
[Question] Looking for non-AI people to work on AGI risks
otto.barten · 30 Dec 2019 20:41 UTC · 10 points · 5 comments · 1 min read · LW link