So8res (Karma: 16,094)
LessWrong: After Dark, a new side of LessWrong · So8res · 1 Apr 2024 22:44 UTC · 34 points · 5 comments · 1 min read · LW link

Ronny and Nate discuss what sorts of minds humanity is likely to find by Machine Learning · So8res and Ronny Fernandez · 19 Dec 2023 23:39 UTC · 40 points · 30 comments · 25 min read · LW link

Quick takes on “AI is easy to control” · So8res · 2 Dec 2023 22:31 UTC · 26 points · 49 comments · 4 min read · LW link

Apocalypse insurance, and the hardline libertarian take on AI risk · So8res · 28 Nov 2023 2:09 UTC · 122 points · 38 comments · 7 min read · LW link

Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense · So8res · 24 Nov 2023 17:37 UTC · 206 points · 83 comments · 5 min read · LW link

How much to update on recent AI governance moves? · habryka and So8res · 16 Nov 2023 23:46 UTC · 112 points · 5 comments · 29 min read · LW link

Thoughts on the AI Safety Summit company policy requests and responses · So8res · 31 Oct 2023 23:54 UTC · 169 points · 14 comments · 10 min read · LW link

AI as a science, and three obstacles to alignment strategies · So8res · 25 Oct 2023 21:00 UTC · 183 points · 80 comments · 11 min read · LW link

A mind needn’t be curious to reap the benefits of curiosity · So8res · 2 Jun 2023 18:00 UTC · 78 points · 14 comments · 1 min read · LW link

Cosmopolitan values don’t come free · So8res · 31 May 2023 15:58 UTC · 137 points · 83 comments · 1 min read · LW link

Sentience matters · So8res · 29 May 2023 21:25 UTC · 143 points · 96 comments · 2 min read · LW link

Request: stop advancing AI capabilities · So8res · 26 May 2023 17:42 UTC · 153 points · 24 comments · 1 min read · LW link

Would we even want AI to solve all our problems? · So8res · 21 Apr 2023 18:04 UTC · 97 points · 15 comments · 2 min read · LW link

How could you possibly choose what an AI wants? · So8res · 19 Apr 2023 17:08 UTC · 105 points · 19 comments · 1 min read · LW link

But why would the AI kill us? · So8res · 17 Apr 2023 18:42 UTC · 129 points · 95 comments · 2 min read · LW link

Misgeneralization as a misnomer · So8res · 6 Apr 2023 20:43 UTC · 129 points · 22 comments · 4 min read · LW link

If interpretability research goes well, it may get dangerous · So8res · 3 Apr 2023 21:48 UTC · 200 points · 11 comments · 2 min read · LW link

Hooray for stepping out of the limelight · So8res · 1 Apr 2023 2:45 UTC · 282 points · 24 comments · 1 min read · LW link

A rough and incomplete review of some of John Wentworth’s research · So8res · 28 Mar 2023 18:52 UTC · 175 points · 18 comments · 18 min read · LW link

A stylized dialogue on John Wentworth’s claims about markets and optimization · So8res · 25 Mar 2023 22:32 UTC · 160 points · 22 comments · 8 min read · LW link