AGI safety from first principles: Alignment · Richard_Ngo · Oct 1, 2020, 3:13 AM · 60 points · 3 comments · 13 min read · LW link
How to not be an alarmist · DirectedEvolution · Sep 30, 2020, 9:35 PM · 8 points · 2 comments · 2 min read · LW link
[Question] Competence vs Alignment · kwiat.dev · Sep 30, 2020, 9:03 PM · 7 points · 4 comments · 1 min read · LW link
“Zero Sum” is a misnomer. · abramdemski · Sep 30, 2020, 6:25 PM · 120 points · 34 comments · 6 min read · LW link
Evaluating Life Extension Advocacy Foundation · emanuele ascani · Sep 30, 2020, 6:04 PM · 7 points · 7 comments · 5 min read · LW link
[AN #119]: AI safety when agents are shaped by environments, not rewards · Rohin Shah · Sep 30, 2020, 5:10 PM · 11 points · 0 comments · 11 min read · LW link (mailchi.mp)
Learning how to learn · Neel Nanda · Sep 30, 2020, 4:50 PM · 44 points · 0 comments · 15 min read · LW link (www.neelnanda.io)
Industrial literacy · jasoncrawford · Sep 30, 2020, 4:39 PM · 308 points · 130 comments · 3 min read · LW link (rootsofprogress.org)
Jason Crawford on the non-linear model of innovation: SSC Online Meetup · JoshuaFox · Sep 30, 2020, 10:13 AM · 7 points · 1 comment · 1 min read · LW link
Holy Grails of Chemistry · chemslug · Sep 30, 2020, 2:03 AM · 34 points · 2 comments · 1 min read · LW link
“Unsupervised” translation as an (intent) alignment problem · paulfchristiano · Sep 30, 2020, 12:50 AM · 62 points · 15 comments · 4 min read · LW link (ai-alignment.com)
[Question] Examples of self-governance to reduce technology risk? · Jia · Sep 29, 2020, 7:31 PM · 10 points · 4 comments · 1 min read · LW link
AGI safety from first principles: Goals and Agency · Richard_Ngo · Sep 29, 2020, 7:06 PM · 77 points · 15 comments · 15 min read · LW link
Seek Upside Risk · Neel Nanda · Sep 29, 2020, 4:47 PM · 20 points · 6 comments · 9 min read · LW link (www.neelnanda.io)
Doing discourse better: Stuff I wish I knew · dynomight · Sep 29, 2020, 2:34 PM · 27 points · 11 comments · 1 min read · LW link (dyno-might.github.io)
David Friedman on Legal Systems Very Different from Ours: SlateStarCodex Online Meetup · JoshuaFox · Sep 29, 2020, 11:18 AM · 10 points · 1 comment · 1 min read · LW link
Reading Discussion Group · NoSignalNoNoise · Sep 29, 2020, 3:59 AM · 6 points · 0 comments · 1 min read · LW link
Cambridge Virtual LW/SSC Meetup · NoSignalNoNoise · Sep 29, 2020, 3:42 AM · 6 points · 0 comments · 1 min read · LW link
AGI safety from first principles: Superintelligence · Richard_Ngo · Sep 28, 2020, 7:53 PM · 87 points · 7 comments · 9 min read · LW link
AGI safety from first principles: Introduction · Richard_Ngo · Sep 28, 2020, 7:53 PM · 128 points · 18 comments · 2 min read · LW link · 1 review
[Question] is scope insensitivity really a brain error? · Kaarlo Tuomi · Sep 28, 2020, 6:37 PM · 4 points · 15 comments · 1 min read · LW link
[Question] What Decision Theory is Implied By Predictive Processing? · johnswentworth · Sep 28, 2020, 5:20 PM · 56 points · 17 comments · 1 min read · LW link
[Question] What are examples of Rationalist fable-like stories? · Mati_Roy · Sep 28, 2020, 4:52 PM · 19 points · 42 comments · 1 min read · LW link
Macro-Procrastination · Neel Nanda · Sep 28, 2020, 4:07 PM · 9 points · 0 comments · 9 min read · LW link (www.neelnanda.io)
[Question] What are good ice breaker questions for meeting people in this community? · Mati_Roy · Sep 28, 2020, 3:07 PM · 9 points · 2 comments · 1 min read · LW link
On Destroying the World · Chris_Leong · Sep 28, 2020, 7:38 AM · 82 points · 86 comments · 5 min read · LW link
“Win First” vs “Chill First” · lionhearted (Sebastian Marshall) · Sep 28, 2020, 6:48 AM · 101 points · 20 comments · 3 min read · LW link
On “Not Screwing Up Ritual Candles” · Raemon · Sep 27, 2020, 9:55 PM · 48 points · 7 comments · 3 min read · LW link
[Question] What to do with imitation humans, other than asking them what the right thing to do is? · Charlie Steiner · Sep 27, 2020, 9:51 PM · 10 points · 6 comments · 1 min read · LW link
[Question] What are good rationality exercises? · Ben Pace · Sep 27, 2020, 9:25 PM · 54 points · 25 comments · 1 min read · LW link · 1 review
Puzzle Games · Scott Garrabrant · Sep 27, 2020, 9:14 PM · 57 points · 70 comments · 7 min read · LW link
[Question] What hard science fiction stories also got the social sciences right? · Mati_Roy · Sep 27, 2020, 8:37 PM · 15 points · 30 comments · 1 min read · LW link
Tips for the most immersive video calls · benkuhn · Sep 27, 2020, 8:36 PM · 65 points · 9 comments · 15 min read · LW link (www.benkuhn.net)
A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments · Søren Elverlin · Sep 27, 2020, 5:51 PM · 17 points · 6 comments · 1 min read · LW link
Not all communication is manipulation: Chaperones don’t manipulate proteins · ChristianKl · Sep 27, 2020, 4:45 PM · 35 points · 14 comments · 2 min read · LW link
Numeracy neglect—A personal postmortem · vlad.proex · Sep 27, 2020, 3:12 PM · 81 points · 29 comments · 9 min read · LW link
Sander Verhaegh on Quine’s Naturalism · Chris_Leong · Sep 27, 2020, 10:58 AM · 8 points · 0 comments · 1 min read · LW link (www.sanderverhaegh.nl)
The whirlpool of reality · Gordon Seidoh Worley · Sep 27, 2020, 2:36 AM · 9 points · 2 comments · 2 min read · LW link
Blog posts as epistemic trust builders · Adam Zerner · Sep 27, 2020, 1:47 AM · 18 points · 7 comments · 2 min read · LW link
Distributed public goods provision · paulfchristiano · Sep 26, 2020, 9:20 PM · 27 points · 3 comments · 5 min read · LW link (sideways-view.com)
Notebook for generating forecasting bets · Amandango · Sep 26, 2020, 8:36 PM · 25 points · 0 comments · 1 min read · LW link
Surviving Petrov Day · Mati_Roy · Sep 26, 2020, 4:40 PM · 35 points · 8 comments · 2 min read · LW link
Petrov Day is not about unilateral action · sen · Sep 26, 2020, 1:41 PM · 25 points · 0 comments · 1 min read · LW link
[Question] What is complexity science? (Not computational complexity theory) How useful is it? What areas is it related to? · philip_b · Sep 26, 2020, 9:15 AM · 7 points · 11 comments · 2 min read · LW link
Honoring Petrov Day on LessWrong, in 2020 · Ben Pace · Sep 26, 2020, 8:01 AM · 117 points · 100 comments · 3 min read · LW link
[Link] Where did you get that idea in the first place? | Meaningness · Kenny · Sep 25, 2020, 3:38 PM · 7 points · 4 comments · 1 min read · LW link
Up close and personal with the world · dominicq · Sep 25, 2020, 6:52 AM · 12 points · 2 comments · 1 min read · LW link
Human Biases that Obscure AI Progress · Danielle Ensign · Sep 25, 2020, 12:24 AM UTC · 42 points · 2 comments · 4 min read · LW link
Losing the forest for the trees with grid drawings · Adam Zerner · Sep 24, 2020, 9:13 PM UTC · 19 points · 1 comment · 2 min read · LW link
Petrov Event Roundup 2020 · Raemon · Sep 24, 2020, 9:07 PM UTC · 43 points · 0 comments · 1 min read · LW link