Archive: December 2018, Page 2
Hell Must Be Destroyed · algekalipso · Dec 6, 2018, 4:11 AM · 33 points · 1 comment · 4 min read · LW link
Trivial Inconvenience Day (December 9th at 12 Noon PST) · namespace · Dec 7, 2018, 1:26 AM · 32 points · 1 comment · 3 min read · LW link
Assuming we’ve solved X, could we do Y... · Stuart_Armstrong · Dec 11, 2018, 6:13 PM · 31 points · 16 comments · 2 min read · LW link
Systems Engineering and the META Program · ryan_b · Dec 20, 2018, 8:19 PM · 31 points · 3 comments · 1 min read · LW link
You can be wrong about what you like, and you often are · Adam Zerner · Dec 17, 2018, 11:49 PM · 30 points · 21 comments · 4 min read · LW link
[Question] What precisely do we mean by AI alignment? · Gordon Seidoh Worley · Dec 9, 2018, 2:23 AM · 30 points · 8 comments · 1 min read · LW link
Reinterpreting “AI and Compute” · habryka · Dec 25, 2018, 9:12 PM · 30 points · 9 comments · 1 min read · LW link · (aiimpacts.org)
Conceptual Analysis for AI Alignment · David Scott Krueger (formerly: capybaralet) · Dec 30, 2018, 12:46 AM · 29 points · 3 comments · 2 min read · LW link
A hundred Shakespeares · Stuart_Armstrong · Dec 11, 2018, 11:11 PM · 29 points · 5 comments · 2 min read · LW link
On Disingenuity · Chris_Leong · Dec 26, 2018, 5:08 PM · 28 points · 3 comments · 1 min read · LW link
[Question] Can dying people “hold on” for something they are waiting for? · Raemon · Dec 27, 2018, 7:53 PM · 28 points · 7 comments · 1 min read · LW link
Peanut Butter · Jacob Falkovich · Dec 3, 2018, 7:30 PM · 28 points · 3 comments · 12 min read · LW link
Book review: Artificial Intelligence Safety and Security · PeterMcCluskey · Dec 8, 2018, 3:47 AM · 27 points · 3 comments · 8 min read · LW link · (www.bayesianinvestor.com)
Open and Welcome Thread December 2018 · Ben Pace · Dec 4, 2018, 10:20 PM · 26 points · 23 comments · 1 min read · LW link
Testing Rationality Apps for Science · BayesianMind · Dec 24, 2018, 10:46 AM · 26 points · 1 comment · 1 min read · LW link
Akrasia is confusion about what you want · Gordon Seidoh Worley · Dec 28, 2018, 9:09 PM · 26 points · 7 comments · 9 min read · LW link
Alignment Newsletter #37 · Rohin Shah · Dec 17, 2018, 7:10 PM · 25 points · 4 comments · 10 min read · LW link · (mailchi.mp)
[Question] What are some concrete problems about logical counterfactuals? · Chris_Leong · Dec 16, 2018, 10:20 AM · 25 points · 4 comments · 1 min read · LW link
[Question] What is abstraction? · Adam Zerner · Dec 15, 2018, 8:36 AM · 25 points · 11 comments · 4 min read · LW link
Good arguments against “cultural appropriation” · Tyrrell_McAllister · Dec 18, 2018, 5:23 PM · 24 points · 12 comments · 2 min read · LW link
[Question] Why should I care about rationality? · TurnTrout · Dec 8, 2018, 3:49 AM · 24 points · 5 comments · 1 min read · LW link
Interpreting genetic testing · jefftk · Dec 15, 2018, 3:56 PM · 24 points · 1 comment · 2 min read · LW link
[Question] Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction? · habryka · Dec 8, 2018, 1:49 AM · 22 points · 13 comments · 1 min read · LW link
Review: Slay the Spire · Zvi · Dec 9, 2018, 8:40 PM · 22 points · 1 comment · 7 min read · LW link · (thezvi.wordpress.com)
Alignment Newsletter #36 · Rohin Shah · Dec 12, 2018, 1:10 AM · 21 points · 0 comments · 11 min read · LW link · (mailchi.mp)
1987 Sci-Fi Authors Timecapsule Predictions For 2012 · namespace · Dec 28, 2018, 6:50 AM · 20 points · 3 comments · 1 min read · LW link · (web.archive.org)
Penalizing Impact via Attainable Utility Preservation · TurnTrout · Dec 28, 2018, 9:46 PM · 20 points · 0 comments · 3 min read · LW link · (arxiv.org)
[Question] In what ways are holidays good? · DanielFilan · Dec 28, 2018, 12:42 AM · 20 points · 19 comments · 1 min read · LW link
[Question] Who’s welcome to our LessWrong meetups? · ChristianKl · Dec 10, 2018, 1:31 PM · 19 points · 5 comments · 1 min read · LW link
Anyone use the “read time” on Post Items? · Raemon · Dec 1, 2018, 11:16 PM · 19 points · 11 comments · 1 min read · LW link
Card Collection and Ownership · Zvi · Dec 27, 2018, 1:10 PM · 19 points · 8 comments · 16 min read · LW link · (thezvi.wordpress.com)
Anthropic paradoxes transposed into Anthropic Decision Theory · Stuart_Armstrong · Dec 19, 2018, 6:07 PM · 18 points · 23 comments · 4 min read · LW link
Is cognitive load a factor in community decline? · ryan_b · Dec 7, 2018, 3:45 PM · 18 points · 6 comments · 1 min read · LW link
LW Update 2018-12-06 – All Posts Page, Questions Page, Posts Item rework · Raemon · Dec 8, 2018, 9:30 PM · 18 points · 1 comment · 1 min read · LW link
[Video] Why Not Just: Think of AGI Like a Corporation? (Robert Miles) · habryka · Dec 23, 2018, 9:49 PM · 17 points · 1 comment · 9 min read · LW link · (www.youtube.com)
Figuring out what Alice wants: non-human Alice · Stuart_Armstrong · Dec 11, 2018, 7:31 PM · 16 points · 17 comments · 2 min read · LW link
Anthropic probabilities and cost functions · Stuart_Armstrong · Dec 21, 2018, 5:54 PM · 16 points · 1 comment · 1 min read · LW link
COEDT Equilibria in Games · Diffractor · Dec 6, 2018, 6:00 PM · 15 points · 0 comments · 3 min read · LW link
[Question] Experiences of Self-deception · Bucky · Dec 18, 2018, 11:10 AM · 15 points · 3 comments · 1 min read · LW link
Benign model-free RL · paulfchristiano · Dec 2, 2018, 4:10 AM · 15 points · 1 comment · 7 min read · LW link
[Question] Best arguments against worrying about AI risk? · Chris_Leong · Dec 23, 2018, 2:57 PM · 15 points · 16 comments · 1 min read · LW link
Alignment Newsletter #35 · Rohin Shah · Dec 4, 2018, 1:10 AM · 15 points · 0 comments · 6 min read · LW link · (mailchi.mp)
[Question] Why should EA care about rationality (and vice-versa)? · Gordon Seidoh Worley · Dec 9, 2018, 10:03 PM · 14 points · 13 comments · 1 min read · LW link
Bounded rationality abounds in models, not explicitly defined · Stuart_Armstrong · Dec 11, 2018, 7:34 PM · 14 points · 9 comments · 1 min read · LW link
[Question] What podcasts does the community listen to? · hristovassilev · Dec 14, 2018, 3:40 PM · 13 points · 6 comments · 1 min read · LW link
Fifteen Things I Learned From Watching a Game of Secret Hitler · Zvi · Dec 17, 2018, 1:40 PM · 13 points · 6 comments · 1 min read · LW link · (thezvi.wordpress.com)
Equivalence of State Machines and Coroutines · Martin Sustrik · Dec 18, 2018, 4:40 AM · 12 points · 1 comment · 1 min read · LW link · (250bpm.com)
Sabine “Bee” Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets · Shmi · Dec 16, 2018, 6:37 AM UTC · 12 points · 0 comments · 1 min read · LW link · (backreaction.blogspot.com)
Artifact Embraces Card Balance Changes · Zvi · Dec 26, 2018, 1:10 PM UTC · 11 points · 1 comment · 4 min read · LW link · (thezvi.wordpress.com)
Boston Secular Solstice · jefftk · Dec 10, 2018, 1:59 AM UTC · 10 points · 0 comments · 1 min read · LW link