LessWrong Archive: November 2021
How To Get Into Independent Research On Alignment/Agency · johnswentworth · Nov 19, 2021, 12:00 AM · 356 points · 38 comments · 13 min read · 2 reviews
Frame Control · Aella · Nov 27, 2021, 10:59 PM · 329 points · 283 comments · 23 min read · 2 reviews
Discussion with Eliezer Yudkowsky on AGI interventions · Rob Bensinger and Eliezer Yudkowsky · Nov 11, 2021, 3:01 AM · 328 points · 253 comments · 34 min read · 1 review
Feature Selection · Zack_M_Davis · Nov 1, 2021, 12:22 AM · 322 points · 24 comments · 16 min read · 1 review
Study Guide · johnswentworth · Nov 6, 2021, 1:23 AM · 302 points · 50 comments · 16 min read
EfficientZero: How It Works · 1a3orn · Nov 26, 2021, 3:17 PM · 299 points · 50 comments · 29 min read · 1 review
A Brief Introduction to Container Logistics · Vitor · Nov 11, 2021, 3:58 PM · 267 points · 22 comments · 11 min read · 1 review
larger language models may disappoint you [or, an eternally unfinished draft] · nostalgebraist · Nov 26, 2021, 11:08 PM · 260 points · 31 comments · 31 min read · 2 reviews
Omicron Variant Post #1: We’re F***ed, It’s Never Over · Zvi · Nov 26, 2021, 7:00 PM · 260 points · 95 comments · 18 min read · (thezvi.wordpress.com)
Ngo and Yudkowsky on alignment difficulty · Eliezer Yudkowsky and Richard_Ngo · Nov 15, 2021, 8:31 PM · 259 points · 151 comments · 99 min read · 1 review
Concentration of Force · Duncan Sabien (Inactive) · Nov 6, 2021, 8:20 AM · 245 points · 23 comments · 12 min read · 1 review
Yudkowsky and Christiano discuss “Takeoff Speeds” · Eliezer Yudkowsky · Nov 22, 2021, 7:35 PM · 210 points · 176 comments · 60 min read · 1 review
Almost everyone should be less afraid of lawsuits · alyssavance · Nov 27, 2021, 2:06 AM · 198 points · 18 comments · 5 min read · 2 reviews
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · Zvi · Nov 15, 2021, 3:50 AM · 197 points · 49 comments · 16 min read · (thezvi.wordpress.com)
Speaking of Stag Hunts · Duncan Sabien (Inactive) · Nov 6, 2021, 8:20 AM · 191 points · 373 comments · 18 min read
Split and Commit · Duncan Sabien (Inactive) · Nov 21, 2021, 6:27 AM · 188 points · 34 comments · 7 min read · 1 review
The Rationalists of the 1950s (and before) also called themselves “Rationalists” · Owain_Evans · Nov 28, 2021, 8:17 PM · 186 points · 32 comments · 3 min read · 1 review
Preprint is out! 100,000 lumens to treat seasonal affective disorder · Fabienne · Nov 12, 2021, 5:59 PM · 170 points · 10 comments · 1 min read
You are probably underestimating how good self-love can be · Charlie Rogers-Smith · Nov 14, 2021, 12:41 AM · 168 points · 19 comments · 12 min read · 1 review
The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century · jasoncrawford · Nov 28, 2021, 5:22 PM · 151 points · 36 comments · 15 min read · 1 review · (rootsofprogress.org)
App and book recommendations for people who want to be happier and more productive · KatWoods · Nov 6, 2021, 5:40 PM · 142 points · 43 comments · 8 min read
Comments on Carlsmith’s “Is power-seeking AI an existential risk?” · So8res · Nov 13, 2021, 4:29 AM · 139 points · 15 comments · 40 min read · 1 review
EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised · gwern · Nov 2, 2021, 2:32 AM · 137 points · 52 comments · 1 min read · (arxiv.org)
How do we become confident in the safety of a machine learning system? · evhub · Nov 8, 2021, 10:49 PM · 134 points · 5 comments · 31 min read
Ngo and Yudkowsky on AI capability gains · Eliezer Yudkowsky and Richard_Ngo · Nov 18, 2021, 10:19 PM · 131 points · 61 comments · 39 min read · 1 review
Sci-Hub sued in India · Connor_Flexman · Nov 13, 2021, 11:12 PM · 131 points · 19 comments · 7 min read
Transcript: “You Should Read HPMOR” · TurnTrout · Nov 2, 2021, 6:20 PM · 124 points · 12 comments · 5 min read · 1 review
Soares, Tallinn, and Yudkowsky discuss AGI cognition · So8res, Eliezer Yudkowsky and jaan · Nov 29, 2021, 7:26 PM · 121 points · 39 comments · 40 min read · 1 review
Omicron Variant Post #2 · Zvi · Nov 29, 2021, 4:30 PM · 120 points · 34 comments · 14 min read · (thezvi.wordpress.com)
Christiano, Cotra, and Yudkowsky on AI progress · Eliezer Yudkowsky and Ajeya Cotra · Nov 25, 2021, 4:45 PM · 119 points · 95 comments · 66 min read
Why I’m excited about Redwood Research’s current project · paulfchristiano · Nov 12, 2021, 7:26 PM · 114 points · 6 comments · 7 min read
The Maker of MIND · Tomás B. · Nov 20, 2021, 4:28 PM · 112 points · 19 comments · 11 min read
Effective Evil · lsusr · Nov 2, 2021, 12:26 AM · 111 points · 7 comments · 3 min read
Where did the 5 micron number come from? Nowhere good. [Wired.com] · Elizabeth · Nov 9, 2021, 7:14 AM · 108 points · 8 comments · 1 min read · 1 review · (www.wired.com)
Improving on the Karma System · Raelifin · Nov 14, 2021, 6:01 PM · 106 points · 36 comments · 19 min read
Money Stuff · Jacob Falkovich · Nov 1, 2021, 4:08 PM · 103 points · 18 comments · 7 min read
Rapid Increase of Highly Mutated B.1.1.529 Strain in South Africa · dawangy · Nov 26, 2021, 1:05 AM · 103 points · 15 comments · 1 min read
Coordination Skills I Wish I Had For the Pandemic · Raemon · Nov 13, 2021, 11:32 PM · 96 points · 9 comments · 6 min read · 1 review
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] · habryka and Buck · Nov 3, 2021, 6:22 PM · 95 points · 4 comments · 1 min read
[Book Review] “The Bell Curve” by Charles Murray · lsusr · Nov 2, 2021, 5:49 AM · 94 points · 134 comments · 23 min read
[Book Review] “Sorceror’s Apprentice” by Tahir Shah · lsusr · Nov 20, 2021, 11:29 AM · 92 points · 11 comments · 7 min read
Comments on OpenPhil’s Interpretability RFP · paulfchristiano · Nov 5, 2021, 10:36 PM · 91 points · 5 comments · 7 min read
AI Safety Needs Great Engineers · Andy Jones · Nov 23, 2021, 3:40 PM · 90 points · 43 comments · 4 min read
A Bayesian Aggregation Paradox · Jsevillamol · Nov 22, 2021, 10:39 AM · 87 points · 23 comments · 7 min read
Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability · TurnTrout · Nov 18, 2021, 1:54 AM · 85 points · 8 comments · 17 min read · (www.overleaf.com)
Transcript for Geoff Anders and Anna Salamon’s Oct. 23 conversation · Rob Bensinger · Nov 8, 2021, 2:19 AM · 83 points · 97 comments · 58 min read
A positive case for how we might succeed at prosaic AI alignment · evhub · Nov 16, 2021, 1:49 AM · 81 points · 46 comments · 6 min read
What would we do if alignment were futile? · Grant Demaree · Nov 14, 2021, 8:09 AM · 75 points · 39 comments · 3 min read
[Question] Worst Commonsense Concepts? · abramdemski · Nov 15, 2021, 6:22 PM · 75 points · 34 comments · 3 min read
Covid 11/25: Another Thanksgiving · Zvi · Nov 25, 2021, 1:40 PM · 73 points · 9 comments · 21 min read · (thezvi.wordpress.com)