Inner alignment requires making assumptions about human values · Matthew Barnett · Jan 20, 2020, 6:38 PM · 26 points · 9 comments · 4 min read · LW link
Workshop on Assured Autonomous Systems (WAAS) · Aryeh Englander · Jan 20, 2020, 4:21 PM · 2 points · 0 comments · 1 min read · LW link
Why Do You Keep Having This Problem? · Davis_Kingsley · Jan 20, 2020, 8:33 AM · 47 points · 16 comments · 1 min read · LW link
[Question] Use-cases for computations, other than running them? · johnswentworth · Jan 19, 2020, 8:52 PM · 30 points · 6 comments · 2 min read · LW link
UML VII: Meta-Learning · Rafael Harth · Jan 19, 2020, 6:23 PM · 14 points · 0 comments · 15 min read · LW link
Adjusting Outdoor Reset · jefftk · Jan 19, 2020, 6:20 PM · 1 point · 0 comments · 1 min read · LW link · (www.jefftk.com)
Madison SSC Meetup: Adversarial Collaborations · marywang · Jan 19, 2020, 4:47 PM · 1 point · 0 comments · 1 min read · LW link
Book review: Human Compatible · PeterMcCluskey · Jan 19, 2020, 3:32 AM · 37 points · 2 comments · 5 min read · LW link · (www.bayesianinvestor.com)
Is NYC Building Much Housing? · jefftk · Jan 18, 2020, 8:50 PM · 0 points · 0 comments · 1 min read · LW link · (www.jefftk.com)
The Road to Mazedom · Zvi · Jan 18, 2020, 2:10 PM · 97 points · 26 comments · 7 min read · LW link · 2 reviews · (thezvi.wordpress.com)
[Question] What types of compute/processing could we distinguish? · MoritzG · Jan 18, 2020, 10:04 AM · 2 points · 9 comments · 1 min read · LW link
[Question] Political Roko’s basilisk · Abhimanyu Pallavi Sudhir · Jan 18, 2020, 9:34 AM · 10 points · 10 comments · 1 min read · LW link
Risk and uncertainty: A false dichotomy? · MichaelA · Jan 18, 2020, 3:09 AM · 6 points · 9 comments · 20 min read · LW link
Remote AI alignment writing group seeking new members · rmoehn · Jan 18, 2020, 2:10 AM · 11 points · 0 comments · 1 min read · LW link
“How quickly can you get this done?” (estimating workload) · kerspoon · Jan 18, 2020, 12:10 AM · 15 points · 9 comments · 4 min read · LW link
Studying Early Stage Science: Research Program Introduction · habryka · Jan 17, 2020, 10:12 PM · 32 points · 1 comment · 15 min read · LW link · (medium.com)
Fiddle Effects Tech · jefftk · Jan 17, 2020, 5:00 PM · 2 points · 0 comments · 1 min read · LW link · (www.jefftk.com)
[Question] How does a Living Being solve the problem of Subsystem Alignment? · Alan Givré · Jan 17, 2020, 9:32 AM · 3 points · 7 comments · 1 min read · LW link
Can we always assign, and make sense of, subjective probabilities? · MichaelA · Jan 17, 2020, 3:05 AM · 11 points · 15 comments · 13 min read · LW link
Against Rationalization II: Sequence Recap · dspeyer · Jan 16, 2020, 10:51 PM · 6 points · 2 comments · 1 min read · LW link
Using Expert Disagreement · dspeyer · Jan 16, 2020, 10:42 PM · 13 points · 1 comment · 5 min read · LW link
Bay Solstice 2019 Retrospective · mingyuan · Jan 16, 2020, 5:15 PM · 75 points · 36 comments · 15 min read · LW link
Reality-Revealing and Reality-Masking Puzzles · AnnaSalamon · Jan 16, 2020, 4:15 PM · 264 points · 57 comments · 13 min read · LW link · 1 review
How to Escape From Immoral Mazes · Zvi · Jan 16, 2020, 1:10 PM · 80 points · 21 comments · 19 min read · LW link · 1 review · (thezvi.wordpress.com)
Testing for Rationalization · dspeyer · Jan 16, 2020, 8:12 AM · 19 points · 0 comments · 2 min read · LW link
[Question] How useful do you think participating to the Human Microbiome Project would be? · Mati_Roy · Jan 15, 2020, 11:51 PM · 4 points · 0 comments · 1 min read · LW link
The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs · Gentzel · Jan 15, 2020, 11:10 PM · 30 points · 4 comments · 3 min read · LW link · (theconsequentialist.wordpress.com)
In Defense of the Arms Races… that End Arms Races · Gentzel · Jan 15, 2020, 9:30 PM · 38 points · 9 comments · 3 min read · LW link · (theconsequentialist.wordpress.com)
Fire Alarm for AGI · user134723 · Jan 15, 2020, 8:41 PM · 1 point · 0 comments · 1 min read · LW link · (blog.acolyer.org)
Go F*** Someone · Jacob Falkovich · Jan 15, 2020, 6:39 PM · 19 points · 23 comments · 8 min read · LW link
[AN #82]: How OpenAI Five distributed their training computation · Rohin Shah · Jan 15, 2020, 6:20 PM · 19 points · 0 comments · 8 min read · LW link · (mailchi.mp)
ACDT: a hack-y acausal decision theory · Stuart_Armstrong · Jan 15, 2020, 5:22 PM · 50 points · 16 comments · 7 min read · LW link
Nashville January 2020 SSC Meetup · friedelcraftiness · Jan 15, 2020, 5:02 PM · 1 point · 0 comments · 1 min read · LW link
In defense of deviousness · Juan Andrés Hurtado Baeza · Jan 15, 2020, 11:56 AM · 12 points · 8 comments · 4 min read · LW link · (medium.com)
[Question] What plausible beliefs do you think could likely get someone diagnosed with a mental illness by a psychiatrist? · Mati_Roy · Jan 15, 2020, 11:13 AM · 4 points · 6 comments · LW link
Avoiding Rationalization · dspeyer · Jan 15, 2020, 10:55 AM · 15 points · 0 comments · 2 min read · LW link
SSC Dublin Meetup · Dan Valentine · Jan 15, 2020, 8:26 AM · 1 point · 0 comments · 1 min read · LW link
[Question] What are beliefs you wouldn’t want (or would feel apprehensive about being) public if you had (or have) them? · Mati_Roy · Jan 15, 2020, 5:30 AM · 6 points · 17 comments · 1 min read · LW link
Reno SSC: Visitors from Out of Town · RenoSSC · Jan 15, 2020, 4:35 AM · 1 point · 0 comments · 1 min read · LW link
Artificial Intelligence and Life Sciences (Why Big Data is not enough to capture biological systems?) · HansNauj · Jan 15, 2020, 1:59 AM · 6 points · 3 comments · 6 min read · LW link
SSC HIKE—BLACK MOUNTAIN · crewman 51 · Jan 15, 2020, 12:18 AM · 1 point · 0 comments · 1 min read · LW link
Clarifying The Malignity of the Universal Prior: The Lexical Update · interstice · Jan 15, 2020, 12:00 AM · 20 points · 2 comments · 3 min read · LW link
[Question] Tips on how to promote effective altruism effectively? Less talk, more action. · culturechange · Jan 14, 2020, 11:17 PM · 3 points · 1 comment · 1 min read · LW link
A rant against robots · Lê Nguyên Hoang · Jan 14, 2020, 10:03 PM · 65 points · 7 comments · 5 min read · LW link
Is backwards causation necessarily absurd? · Chris_Leong · Jan 14, 2020, 7:25 PM · 22 points · 9 comments · 1 min read · LW link
Predictors exist: CDT going bonkers… forever · Stuart_Armstrong · Jan 14, 2020, 4:19 PM · 46 points · 31 comments · 1 min read · LW link
Austin LW/SSC Far-comers Meetup: Feb. 8, 1:30pm · jchan · Jan 14, 2020, 2:46 PM · 2 points · 1 comment · 1 min read · LW link
Red Flags for Rationalization · dspeyer · Jan 14, 2020, 7:34 AM · 25 points · 6 comments · 4 min read · LW link
Advanced Anki (Memorization Software) · Arthur Milchior · Jan 14, 2020, 2:25 AM · 4 points · 0 comments · 1 min read · LW link
Anki (Memorization Software) for Beginners · Arthur Milchior · Jan 14, 2020, 1:55 AM · 7 points · 5 comments · 1 min read · LW link