Prometheus (Karma: 474)
[Question] Why do so many think deception in AI is important? · 13 Jan 2024 8:14 UTC · 23 points · 12 comments · 1 min read · LW link

Back to the Past to the Future · 18 Oct 2023 16:51 UTC · 5 points · 0 comments · 1 min read · LW link

[Question] Why aren’t more people in AIS familiar with PDP? · 1 Sep 2023 15:27 UTC · 4 points · 9 comments · 1 min read · LW link

Why Is No One Trying To Align Profit Incentives With Alignment Research? · 23 Aug 2023 13:16 UTC · 51 points · 11 comments · 4 min read · LW link

Slaying the Hydra: toward a new game board for AI · 23 Jun 2023 17:04 UTC · 0 points · 5 comments · 6 min read · LW link

Lightning Post: Things people in AI Safety should stop talking about · 20 Jun 2023 15:00 UTC · 23 points · 6 comments · 2 min read · LW link

Aligned Objectives Prize Competition · 15 Jun 2023 12:42 UTC · 8 points · 0 comments · 2 min read · LW link · (app.impactmarkets.io)

Prometheus’s Shortform · 13 Jun 2023 23:21 UTC · 3 points · 38 comments · 1 min read · LW link

Using Consensus Mechanisms as an approach to Alignment · 10 Jun 2023 23:38 UTC · 9 points · 2 comments · 6 min read · LW link

Humans are not prepared to operate outside their moral training distribution · 10 Apr 2023 21:44 UTC · 36 points · 1 comment · 3 min read · LW link

Widening Overton Window—Open Thread · 31 Mar 2023 10:03 UTC · 23 points · 8 comments · 1 min read · LW link

4 Key Assumptions in AI Safety · 7 Nov 2022 10:50 UTC · 20 points · 5 comments · 7 min read · LW link

Five Areas I Wish EAs Gave More Focus · 27 Oct 2022 6:13 UTC · 13 points · 18 comments · 1 min read · LW link

The Twins · 28 Dec 2020 1:26 UTC · 3 points · 3 comments · 6 min read · LW link