The Rational Utilitarian Love Movement (A Historical Retrospective) · Caleb Biddulph · Nov 3, 2022, 7:11 AM · 3 points · 0 comments · LW link
The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · mako yass · Nov 3, 2022, 6:47 AM · 30 points · 13 comments · 10 min read · LW link
Open Letter Against Reckless Nuclear Escalation and Use · Max Tegmark · Nov 3, 2022, 5:34 AM · 27 points · 25 comments · 1 min read · LW link
Lazy Python Argument Parsing · jefftk · Nov 3, 2022, 2:20 AM · 20 points · 3 comments · 1 min read · LW link · (www.jefftk.com)
AI as a Civilizational Risk Part 5/6: Relationship between C-risk and X-risk · PashaKamyshev · Nov 3, 2022, 2:19 AM · 2 points · 0 comments · 7 min read · LW link
[Question] Is there a good way to award a fixed prize in a prediction contest? · jchan · Nov 2, 2022, 9:37 PM · 18 points · 5 comments · 1 min read · LW link
“Are Experiments Possible?” Seeds of Science call for reviewers · rogersbacon · Nov 2, 2022, 8:05 PM · 8 points · 0 comments · 1 min read · LW link
Humans do acausal coordination all the time · Adam Jermyn · Nov 2, 2022, 2:40 PM · 57 points · 35 comments · 3 min read · LW link
Far-UVC Light Update: No, LEDs are not around the corner (tweetstorm) · Davidmanheim · Nov 2, 2022, 12:57 PM · 73 points · 27 comments · 4 min read · LW link · (twitter.com)
Housing and Transit Thoughts #1 · Zvi · Nov 2, 2022, 12:10 PM · 35 points · 5 comments · 16 min read · LW link · (thezvi.wordpress.com)
Mind is uncountable · Filip Sondej · Nov 2, 2022, 11:51 AM · 18 points · 22 comments · LW link
AI Safety Needs Great Product Builders · goodgravy · Nov 2, 2022, 11:33 AM · 14 points · 2 comments · LW link
Why is fiber good for you? · braces · Nov 2, 2022, 2:04 AM · 18 points · 2 comments · 2 min read · LW link
Information Markets · eva_ · Nov 2, 2022, 1:24 AM · 46 points · 6 comments · 12 min read · LW link
Sequence Reread: Fake Beliefs [plus sequence spotlight meta] · Raemon · Nov 2, 2022, 12:09 AM · 27 points · 3 comments · 1 min read · LW link
Real-Time Research Recording: Can a Transformer Re-Derive Positional Info? · Neel Nanda · Nov 1, 2022, 11:56 PM · 69 points · 16 comments · 1 min read · LW link · (youtu.be)
All AGI Safety questions welcome (especially basic ones) [~monthly thread] · Robert Miles · Nov 1, 2022, 11:23 PM · 68 points · 105 comments · 2 min read · LW link
[Question] Which Issues in Conceptual Alignment have been Formalised or Observed (or not)? · ojorgensen · Nov 1, 2022, 10:32 PM · 4 points · 0 comments · 1 min read · LW link
AI as a Civilizational Risk Part 4/6: Bioweapons and Philosophy of Modification · PashaKamyshev · Nov 1, 2022, 8:50 PM · 7 points · 1 comment · 8 min read · LW link
Open & Welcome Thread—November 2022 · MondSemmel · Nov 1, 2022, 6:47 PM · 14 points · 46 comments · 1 min read · LW link
Mildly Against Donor Lotteries · jefftk · Nov 1, 2022, 6:10 PM · 10 points · 9 comments · 3 min read · LW link · (www.jefftk.com)
Progress links and tweets, 2022-11-01 · jasoncrawford · Nov 1, 2022, 5:48 PM · 16 points · 4 comments · 3 min read · LW link · (rootsofprogress.org)
On the correspondence between AI-misalignment and cognitive dissonance using a behavioral economics model · Stijn Bruers · Nov 1, 2022, 5:39 PM · 4 points · 0 comments · 6 min read · LW link
Threat Model Literature Review · zac_kenton, Rohin Shah, David Lindner, Vikrant Varma, Vika, Mary Phuong, Ramana Kumar and Elliot Catt · Nov 1, 2022, 11:03 AM · 78 points · 4 comments · 25 min read · LW link
Clarifying AI X-risk · zac_kenton, Rohin Shah, David Lindner, Vikrant Varma, Vika, Mary Phuong, Ramana Kumar and Elliot Catt · Nov 1, 2022, 11:03 AM · 127 points · 24 comments · 4 min read · LW link · 1 review
Auditing games for high-level interpretability · Paul Colognese · Nov 1, 2022, 10:44 AM · 33 points · 1 comment · 7 min read · LW link
Remember to translate your thoughts back again · brook · Nov 1, 2022, 8:49 AM · 25 points · 11 comments · 3 min read · LW link · (forum.effectivealtruism.org)
Conversations on Alcohol Consumption · Annapurna · Nov 1, 2022, 5:09 AM · 20 points · 6 comments · 9 min read · LW link
ML Safety Scholars Summer 2022 Retrospective · TW123 · Nov 1, 2022, 3:09 AM · 29 points · 0 comments · LW link
EA & LW Forums Weekly Summary (24–30th Oct ’22) · Zoe Williams · Nov 1, 2022, 2:58 AM · 13 points · 1 comment · LW link
Caution when interpreting Deepmind’s In-context RL paper · Sam Marks · Nov 1, 2022, 2:42 AM · 105 points · 8 comments · 4 min read · LW link
What sorts of systems can be deceptive? · Andrei Alexandru · Oct 31, 2022, 10:00 PM · 16 points · 0 comments · 7 min read · LW link
“Cars and Elephants”: a handwavy argument/analogy against mechanistic interpretability · David Scott Krueger (formerly: capybaralet) · Oct 31, 2022, 9:26 PM · 51 points · 25 comments · 2 min read · LW link
Superintelligent AI is necessary for an amazing future, but far from sufficient · So8res · Oct 31, 2022, 9:16 PM · 132 points · 48 comments · 34 min read · LW link
Sanity-checking in an age of hyperbole · Ciprian Elliu Ivanof · Oct 31, 2022, 8:04 PM · 2 points · 4 comments · 2 min read · LW link
Why Aren’t There More Schelling Holidays? · johnswentworth · Oct 31, 2022, 7:31 PM · 63 points · 21 comments · 1 min read · LW link
The circular problem of epistemic irresponsibility · Roman Leventov · Oct 31, 2022, 5:23 PM · 5 points · 2 comments · 8 min read · LW link
AI as a Civilizational Risk Part 3/6: Anti-economy and Signal Pollution · PashaKamyshev · Oct 31, 2022, 5:03 PM · 7 points · 4 comments · 14 min read · LW link
Average utilitarianism is non-local · Yair Halberstadt · Oct 31, 2022, 4:36 PM · 29 points · 13 comments · 1 min read · LW link
Marvel Snap: Phase 1 · Zvi · Oct 31, 2022, 3:20 PM · 23 points · 1 comment · 14 min read · LW link · (thezvi.wordpress.com)
Boundaries vs Frames · Scott Garrabrant · Oct 31, 2022, 3:14 PM · 58 points · 10 comments · 7 min read · LW link
Embedding safety in ML development · zeshen · Oct 31, 2022, 12:27 PM · 24 points · 1 comment · 18 min read · LW link
[Book] Interpretable Machine Learning: A Guide for Making Black Box Models Explainable · Esben Kran · Oct 31, 2022, 11:38 AM · 20 points · 1 comment · 1 min read · LW link · (christophm.github.io)
My (naive) take on Risks from Learned Optimization · Artyom Karpov · Oct 31, 2022, 10:59 AM · 7 points · 0 comments · 5 min read · LW link
Tactical Nuclear Weapons Aren’t Cost-Effective Compared to Precision Artillery · Lao Mein · Oct 31, 2022, 4:33 AM · 28 points · 7 comments · 3 min read · LW link
Gandalf or Saruman? A Soldier in Scout’s Clothing · DirectedEvolution · Oct 31, 2022, 2:40 AM · 41 points · 1 comment · 4 min read · LW link
Me (Steve Byrnes) on the “Brain Inspired” podcast · Steven Byrnes · Oct 30, 2022, 7:15 PM · 26 points · 1 comment · 1 min read · LW link · (braininspired.co)
“Normal” is the equilibrium state of past optimization processes · Alex_Altair · Oct 30, 2022, 7:03 PM · 82 points · 5 comments · 5 min read · LW link
AI as a Civilizational Risk Part 2/6: Behavioral Modification · PashaKamyshev · Oct 30, 2022, 4:57 PM · 9 points · 0 comments · 10 min read · LW link
Instrumental ignoring AI, Dumb but not useless. · Donald Hobson · Oct 30, 2022, 4:55 PM · 7 points · 6 comments · 2 min read · LW link