Brainstorm of things that could force an AI team to burn their lead
So8res · Jul 24, 2022, 11:58 PM · 134 points · 8 comments · 13 min read · LW link

Finding Skeletons on Rashomon Ridge
David Udell, Peter S. Park and NickyP · Jul 24, 2022, 10:31 PM · 30 points · 2 comments · 7 min read · LW link

Gathering Information you won’t use directly is often useful
Johannes C. Mayer · Jul 24, 2022, 9:21 PM · 6 points · 1 comment · 1 min read · LW link

[Question] Impact of “‘Let’s think step by step’ is all you need”?
yrimon · Jul 24, 2022, 8:59 PM · 20 points · 2 comments · 1 min read · LW link

The Most Important Century: The Animation
Writer and Matthew Barnett · Jul 24, 2022, 8:58 PM · 46 points · 2 comments · 20 min read · LW link (youtu.be)

Hiring Programmers in Academia
jefftk · Jul 24, 2022, 8:20 PM · 36 points · 19 comments · 2 min read · LW link (www.jefftk.com)

Less Wrong Budapest July 30th Meetup
Richard Horvath · Jul 24, 2022, 7:07 PM · 2 points · 0 comments · 1 min read · LW link

Relationship between subjective experience and intelligence?
Q Home · Jul 24, 2022, 9:10 AM · 5 points · 4 comments · 9 min read · LW link

Double Crux
CFAR!Duncan · Jul 24, 2022, 6:34 AM · 61 points · 9 comments · 11 min read · LW link

Example Meetup Description
Julius · Jul 24, 2022, 5:38 AM · 6 points · 0 comments · 2 min read · LW link

Eavesdropping on Aliens: A Data Decoding Challenge
anonymousaisafety · Jul 24, 2022, 4:35 AM · 44 points · 9 comments · 4 min read · LW link

Information theoretic model analysis may not lend much insight, but we may have been doing them wrong!
Garrett Baker · Jul 24, 2022, 12:42 AM · 7 points · 0 comments · 10 min read · LW link

What’s next for instrumental rationality?
Andrew_Critch · Jul 23, 2022, 10:55 PM · 63 points · 7 comments · 1 min read · LW link

Easy guide for running a local Rationality meetup
nsokolsky · Jul 23, 2022, 10:52 PM · 13 points · 1 comment · 6 min read · LW link

Curating “The Epistemic Sequences” (list v.0.1)
Andrew_Critch · Jul 23, 2022, 10:17 PM · 65 points · 12 comments · 7 min read · LW link

Room Opening
jefftk · Jul 23, 2022, 9:00 PM · 8 points · 3 comments · 4 min read · LW link (www.jefftk.com)

A Bias Against Altruism
Lone Pine · Jul 23, 2022, 8:44 PM · 58 points · 30 comments · 2 min read · LW link

What Environment Properties Select Agents For World-Modeling?
Thane Ruthenis · Jul 23, 2022, 7:27 PM · 25 points · 1 comment · 12 min read · LW link

Which singularity schools plus the no singularity school was right?
Noosphere89 · Jul 23, 2022, 3:16 PM · 9 points · 26 comments · 9 min read · LW link

Basic Post Scarcity Q&A
lorepieri · Jul 23, 2022, 1:43 PM · 3 points · 0 comments · 1 min read · LW link (lorenzopieri.com)

Robustness to Scaling Down: More Important Than I Thought
adamShimi · Jul 23, 2022, 11:40 AM · 38 points · 5 comments · 3 min read · LW link

Eating Boogers
George3d6 · Jul 23, 2022, 11:20 AM · 17 points · 5 comments · 6 min read · LW link (www.epistem.ink)

On Akrasia, Habits and Reward Maximization
Aiyen · Jul 23, 2022, 8:34 AM · 14 points · 1 comment · 6 min read · LW link

Which values are stable under ontology shifts?
Richard_Ngo · Jul 23, 2022, 2:40 AM · 75 points · 48 comments · 3 min read · LW link (thinkingcomplete.blogspot.com)

Trying out Prompt Engineering on TruthfulQA
Megan Kinniment · Jul 23, 2022, 2:04 AM · 10 points · 0 comments · 8 min read · LW link

Connor Leahy on Dying with Dignity, EleutherAI and Conjecture
Michaël Trazzi · Jul 22, 2022, 6:44 PM · 195 points · 29 comments · 14 min read · LW link (theinsideview.ai)

Wyclif’s Dust: the missing chapter
David Hugh-Jones · Jul 22, 2022, 6:27 PM · 9 points · 0 comments · 4 min read · LW link (wyclif.substack.com)

Making DALL-E Count
DirectedEvolution · Jul 22, 2022, 9:11 AM · 23 points · 12 comments · 4 min read · LW link

One-day applied rationality workshop in Berlin Aug 29 (after LWCW)
Duncan Sabien (Deactivated) · Jul 22, 2022, 7:58 AM · 30 points · 5 comments · 2 min read · LW link

Internal Double Crux
CFAR!Duncan · Jul 22, 2022, 4:34 AM · 93 points · 15 comments · 12 min read · LW link

Conditioning Generative Models with Restrictions
Adam Jermyn · Jul 21, 2022, 8:33 PM · 18 points · 4 comments · 8 min read · LW link

Our Existing Solutions to AGI Alignment (semi-safe)
Michael Soareverix · Jul 21, 2022, 7:00 PM · 12 points · 1 comment · 3 min read · LW link

Changing the world through slack & hobbies
Steven Byrnes · Jul 21, 2022, 6:11 PM · 261 points · 13 comments · 10 min read · LW link

Which personalities do we find intolerable?
weathersystems · Jul 21, 2022, 3:56 PM · 10 points · 3 comments · 6 min read · LW link

YouTubeTV and Spoilers
Zvi · Jul 21, 2022, 1:50 PM · 16 points · 6 comments · 8 min read · LW link (thezvi.wordpress.com)

Covid 7/21/22: Featuring ASPR
Zvi · Jul 21, 2022, 1:50 PM · 27 points · 0 comments · 14 min read · LW link (thezvi.wordpress.com)

[Question] How much to optimize for the short-timelines scenario?
SoerenMind · Jul 21, 2022, 10:47 AM · 20 points · 3 comments · 1 min read · LW link

Is Gas Green?
ChristianKl · Jul 21, 2022, 10:30 AM · 19 points · 19 comments · 1 min read · LW link

Why are politicians polarized?
ErnestScribbler · Jul 21, 2022, 8:17 AM · 15 points · 24 comments · 7 min read · LW link

[AN #173] Recent language model results from DeepMind
Rohin Shah · Jul 21, 2022, 2:30 AM · 37 points · 9 comments · 8 min read · LW link (mailchi.mp)

Don’t take the organizational chart literally
lc · Jul 21, 2022, 12:56 AM · 54 points · 21 comments · 4 min read · LW link

Personal forecasting retrospective: 2020-2022
elifland · Jul 21, 2022, 12:07 AM · 38 points · 4 comments · 8 min read · LW link (www.foxy-scout.com)

Defining Optimization in a Deeper Way Part 3
J Bostock · Jul 20, 2022, 10:06 PM · 8 points · 0 comments · 2 min read · LW link

Cognitive Risks of Adolescent Binge Drinking
Elizabeth and Martin Bernstorff · Jul 20, 2022, 9:10 PM · 70 points · 12 comments · 10 min read · LW link (acesounderglass.com)

Why AGI Timeline Research/Discourse Might Be Overrated
Noosphere89 · Jul 20, 2022, 8:26 PM · 5 points · 0 comments · 1 min read · LW link (forum.effectivealtruism.org)

Enlightenment Values in a Vulnerable World
Maxwell Tabarrok · Jul 20, 2022, 7:52 PM · 15 points · 6 comments · 31 min read · LW link (maximumprogress.substack.com)

Countering arguments against working on AI safety
Rauno Arike · Jul 20, 2022, 6:23 PM · 7 points · 2 comments · 7 min read · LW link

A Short Intro to Humans
Ben Amitay · Jul 20, 2022, 3:28 PM · 1 point · 1 comment · 7 min read · LW link

How to Diversify Conceptual Alignment: the Model Behind Refine
adamShimi · Jul 20, 2022, 10:44 AM · 87 points · 11 comments · 8 min read · LW link

[Question] What are the simplest questions in applied rationality where you don’t know the answer to?
ChristianKl · Jul 20, 2022, 9:53 AM · 26 points · 11 comments · 1 min read · LW link