- Deck Guide: Burning Drakes · Zvi · Nov 13, 2018, 7:40 PM · 8 points · 0 comments · 12 min read · LW link (thezvi.wordpress.com)
- Acknowledging Human Preference Types to Support Value Learning · Nandi · Nov 13, 2018, 6:57 PM · 34 points · 4 comments · 9 min read · LW link
- The Steering Problem · paulfchristiano · Nov 13, 2018, 5:14 PM · 44 points · 12 comments · 7 min read · LW link
- Post-Rationality and Rationality, A Dialogue · agilecaveman · Nov 13, 2018, 5:55 AM · 2 points · 2 comments · 10 min read · LW link
- Laughing Away the Little Miseries · Rossin · Nov 13, 2018, 3:31 AM · 12 points · 7 comments · 2 min read · LW link
- Kelly bettors · DanielFilan · Nov 13, 2018, 12:40 AM · 24 points · 3 comments · 10 min read · LW link (danielfilan.com)
- Wireheading as a Possible Contributor to Civilizational Decline · avturchin · Nov 12, 2018, 8:33 PM · 3 points · 6 comments · LW link (forum.effectivealtruism.org)
- Alignment Newsletter #32 · Rohin Shah · Nov 12, 2018, 5:20 PM · 18 points · 0 comments · 12 min read · LW link (mailchi.mp)
- AI development incentive gradients are not uniformly terrible · rk · Nov 12, 2018, 4:27 PM · 21 points · 12 comments · 6 min read · LW link
- What is being? · Andrew Bindon · Nov 12, 2018, 3:33 PM · −14 points · 20 comments · 7 min read · LW link
- Aligned AI, The Scientist · Shmi · Nov 12, 2018, 6:36 AM · 12 points · 2 comments · 1 min read · LW link
- Combat vs Nurture: Cultural Genesis · Ruby · Nov 12, 2018, 2:11 AM · 35 points · 12 comments · 6 min read · LW link
- Rationality Is Not Systematized Winning · namespace · Nov 11, 2018, 10:05 PM · 36 points · 20 comments · 1 min read · LW link (www.thelastrationalist.com)
- “She Wanted It” · sarahconstantin · Nov 11, 2018, 10:00 PM · 120 points · 21 comments · 7 min read · LW link (srconstantin.wordpress.com)
- Future directions for ambitious value learning · Rohin Shah · Nov 11, 2018, 3:53 PM · 48 points · 9 comments · 4 min read · LW link
- Reconciling Left and Right, from the Bottom-Up · jesseduffield · Nov 11, 2018, 8:36 AM · 14 points · 2 comments · 15 min read · LW link
- Competitive Markets as Distributed Backprop · johnswentworth · Nov 10, 2018, 4:47 PM · 59 points · 10 comments · 4 min read · LW link · 1 review
- Productivity: Instrumental Rationality · frdk666 · Nov 10, 2018, 2:58 PM · 6 points · 7 comments · 1 min read · LW link
- Preface to the sequence on iterated amplification · paulfchristiano · Nov 10, 2018, 1:24 PM · 44 points · 8 comments · 3 min read · LW link
- Specification gaming examples in AI · Samuel Rødal · Nov 10, 2018, 12:00 PM · 24 points · 6 comments · 1 min read · LW link (docs.google.com)
- Real-time hiring with prediction markets · ryan_b · Nov 9, 2018, 10:10 PM · 17 points · 9 comments · 1 min read · LW link
- Current AI Safety Roles for Software Engineers · ozziegooen · Nov 9, 2018, 8:57 PM · 70 points · 9 comments · 4 min read · LW link
- Model Mis-specification and Inverse Reinforcement Learning · Owain_Evans and jsteinhardt · Nov 9, 2018, 3:33 PM · 34 points · 3 comments · 16 min read · LW link
- Open AI: Can we rule out near-term AGI? · ShardPhoenix · Nov 9, 2018, 12:16 PM · 13 points · 1 comment · 1 min read · LW link (www.youtube.com)
- Prediction-Augmented Evaluation Systems · ozziegooen · Nov 9, 2018, 10:55 AM · 44 points · 12 comments · 8 min read · LW link
- Update the best textbooks on every subject list · ryan_b · Nov 8, 2018, 8:54 PM · 93 points · 14 comments · 1 min read · LW link
- Multi-Agent Overoptimization, and Embedded Agent World Models · Davidmanheim · Nov 8, 2018, 8:33 PM · 8 points · 3 comments · 3 min read · LW link
- Embedded Curiosities · Scott Garrabrant and abramdemski · Nov 8, 2018, 2:19 PM · 91 points · 1 comment · 2 min read · LW link
- On first looking into Russell’s History · Richard_Ngo · Nov 8, 2018, 11:20 AM · 23 points · 6 comments · 5 min read · LW link (thinkingcomplete.blogspot.com)
- Is Copenhagen or Many Worlds true? An experiment. What? Yes. · Jonathanm · Nov 8, 2018, 10:10 AM · 11 points · 3 comments · 1 min read · LW link (arxiv.org)
- The new Effective Altruism forum just launched · habryka · Nov 8, 2018, 1:59 AM · 27 points · 6 comments · 1 min read · LW link
- What are Universal Inductors, Again? · Diffractor · Nov 7, 2018, 10:32 PM · 16 points · 0 comments · 7 min read · LW link
- Burnout: What it is and how to Treat it. · Elizabeth · Nov 7, 2018, 10:02 PM · 51 points · 0 comments · 1 min read · LW link (forum.effectivealtruism.org)
- Bayes Questions · Bucky · Nov 7, 2018, 4:54 PM · 21 points · 13 comments · 2 min read · LW link
- Latent Variables and Model Mis-Specification · jsteinhardt · Nov 7, 2018, 2:48 PM · 24 points · 8 comments · 9 min read · LW link
- Paris SSC Meetup · fbreton · Nov 7, 2018, 10:04 AM · 1 point · 0 comments · 1 min read · LW link
- Reframing a Crush: Distilling the “like” out of “like like” · squidious · Nov 7, 2018, 2:50 AM · 6 points · 0 comments · 3 min read · LW link (opalsandbonobos.blogspot.com)
- Rationality of demonstrating & voting · bfinn · Nov 7, 2018, 12:09 AM · 24 points · 21 comments · 8 min read · LW link
- Triangle SSC Meetup · willbobaggins · Nov 6, 2018, 10:43 PM · 1 point · 0 comments · LW link
- The Vulnerable World Hypothesis (by Bostrom) · Ben Pace · Nov 6, 2018, 8:05 PM · 50 points · 17 comments · 4 min read · LW link (nickbostrom.com)
- Subsystem Alignment · abramdemski and Scott Garrabrant · Nov 6, 2018, 4:16 PM · 102 points · 12 comments · 1 min read · LW link
- Alignment Newsletter #31 · Rohin Shah · Nov 5, 2018, 11:50 PM · 17 points · 0 comments · 12 min read · LW link (mailchi.mp)
- Octopath Traveler: Spoiler-Free Review · Zvi · Nov 5, 2018, 5:50 PM · 11 points · 1 comment · 14 min read · LW link (thezvi.wordpress.com)
- Speculations on improving debating · Richard_Ngo · Nov 5, 2018, 4:10 PM · 22 points · 4 comments · 4 min read · LW link (thinkingcomplete.blogspot.com)
- Boulder Slate Star Codex Meetup · corticalcircuitry · Nov 5, 2018, 3:01 PM · 5 points · 0 comments · LW link
- Humans can be assigned any values whatsoever… · Stuart_Armstrong · Nov 5, 2018, 2:26 PM · 54 points · 27 comments · 4 min read · LW link
- When does rationality-as-search have nontrivial implications? · nostalgebraist · Nov 4, 2018, 10:42 PM · 72 points · 12 comments · 3 min read · LW link
- Beliefs at different timescales · Nisan · Nov 4, 2018, 8:10 PM · 25 points · 12 comments · 2 min read · LW link
- No Really, Why Aren’t Rationalists Winning? · Sailor Vulcan · Nov 4, 2018, 6:11 PM · 40 points · 90 comments · 5 min read · LW link
- Robust Delegation · abramdemski and Scott Garrabrant · Nov 4, 2018, 4:38 PM · 116 points · 10 comments · 1 min read · LW link