LessWrong Archive: Page 2
- Less Wrong Community Weekend 2022 · UnplannedCauliflower · May 1, 2022, 11:59 AM · 34 points · 11 comments · 2 min read · LW link
- [Question] How confident are we that there are no Extremely Obvious Aliens? · Logan Zoellner · May 1, 2022, 10:59 AM · 60 points · 25 comments · 1 min read · LW link
- How to be skeptical about meditation/Buddhism · Viliam · May 1, 2022, 10:30 AM · 78 points · 47 comments · 2 min read · LW link
- My Approach to Non-Literal Communication · Isaac King · May 1, 2022, 2:47 AM · 29 points · 8 comments · 14 min read · LW link
- Narrative Syncing · AnnaSalamon · May 1, 2022, 1:48 AM · 118 points · 48 comments · 7 min read · LW link
- What is the solution to the Alignment problem? · Algon · Apr 30, 2022, 11:19 PM · 24 points · 2 comments · 1 min read · LW link
- [Question] Why hasn’t deep learning generated significant economic value yet? · Alex_Altair · Apr 30, 2022, 8:27 PM · 114 points · 89 comments · 2 min read · LW link
- Nuclear Energy—Good but not the silver bullet we were hoping for · Marius Hobbhahn · Apr 30, 2022, 3:41 PM · 64 points · 33 comments · 15 min read · LW link
- Quick Thoughts on A.I. Governance · Nicholas / Heather Kross · Apr 30, 2022, 2:49 PM · 70 points · 8 comments · 2 min read · LW link (www.thinkingmuchbetter.com)
- Discussion on Thomas Philippon’s paper on TFP growth being linear · Arjun Yadav · Apr 30, 2022, 2:25 PM · 2 points · 0 comments · 1 min read · LW link (forum.effectivealtruism.org)
- [untitled post] · superads91 · Apr 30, 2022, 1:01 PM · 0 points · 49 comments · 1 min read · LW link
- Note-Taking without Hidden Messages · Hoagy · Apr 30, 2022, 11:15 AM · 17 points · 2 comments · 4 min read · LW link
- [Question] How good is spending? · tryactions · Apr 30, 2022, 7:27 AM · 5 points · 11 comments · 1 min read · LW link
- [Linkpost] New multi-modal Deepmind model fusing Chinchilla with images and videos · p.b. · Apr 30, 2022, 3:47 AM · 53 points · 18 comments · 1 min read · LW link
- Salvage Epistemology · jimrandomh · Apr 30, 2022, 2:10 AM · 101 points · 119 comments · 1 min read · LW link
- Learning the smooth prior · Geoffrey Irving, Rohin Shah and evhub · Apr 29, 2022, 9:10 PM · 35 points · 0 comments · 12 min read · LW link
- [Question] Do FDT (or similar) recommend reparations? · David Scott Krueger (formerly: capybaralet) · Apr 29, 2022, 5:34 PM · 13 points · 3 comments · 1 min read · LW link
- Saying no to the Appleman · Johannes C. Mayer · Apr 29, 2022, 10:39 AM · 47 points · 12 comments · 3 min read · LW link
- Prize for Alignment Research Tasks · stuhlmueller and William_S · Apr 29, 2022, 8:57 AM · 64 points · 38 comments · 10 min read · LW link
- Increasing Demandingness in EA · jefftk · Apr 29, 2022, 1:20 AM · 61 points · 22 comments · 3 min read · LW link (www.jefftk.com)
- [Question] What is a training “step” vs. “episode” in machine learning? · Evan R. Murphy · Apr 28, 2022, 9:53 PM · 10 points · 4 comments · 1 min read · LW link
- Facts Matter · mrdlm · Apr 28, 2022, 9:19 PM · 20 points · 2 comments · 3 min read · LW link
- [Question] Is alignment possible? · Shay · Apr 28, 2022, 9:18 PM · 0 points · 5 comments · 1 min read · LW link
- Two Prosocial Rejection Norms · Emrik · Apr 28, 2022, 8:53 PM · 54 points · 21 comments · 3 min read · LW link
- Dath Ilan vs. Sid Meier’s Alpha Centauri: Pareto Improvements · David Udell · Apr 28, 2022, 7:26 PM · 35 points · 16 comments · 2 min read · LW link
- A Parable Of Explainability · George3d6 · Apr 28, 2022, 4:46 PM · 10 points · 5 comments · 5 min read · LW link (www.epistem.ink)
- Keep your protos in one repo · RobertM · Apr 28, 2022, 3:53 PM · 5 points · 4 comments · 5 min read · LW link (docs.protocall.dev)
- Covid 4/28/22: Take My Paxlovid, Please · Zvi · Apr 28, 2022, 3:20 PM · 35 points · 14 comments · 8 min read · LW link (thezvi.wordpress.com)
- 3-bit filters · iivonen · Apr 28, 2022, 11:55 AM · 8 points · 0 comments · 2 min read · LW link
- Jaan Tallinn’s 2021 Philanthropy Overview · jaan · Apr 28, 2022, 9:55 AM · 71 points · 2 comments · 1 min read · LW link (jaan.online)
- Doom sooner · Flaglandbase · Apr 28, 2022, 7:24 AM · 1 point · 0 comments · 3 min read · LW link
- How Might an Alignment Attractor Look like? · Shmi · Apr 28, 2022, 6:46 AM · 47 points · 15 comments · 2 min read · LW link
- Virtue signaling is sometimes the best or the only metric we have · Holly_Elmore · Apr 28, 2022, 4:52 AM · 41 points · 43 comments · 5 min read · LW link
- The Gospel of Martin Luther · lsusr · Apr 28, 2022, 4:29 AM · 9 points · 2 comments · 1 min read · LW link
- Letter to my Squire · lsusr · Apr 28, 2022, 4:16 AM · 9 points · 0 comments · 1 min read · LW link
- Slides: Potential Risks From Advanced AI · Aryeh Englander · Apr 28, 2022, 2:15 AM · 7 points · 0 comments · 1 min read · LW link
- Naive comments on AGIlignment · Ericf · Apr 28, 2022, 1:08 AM · −8 points · 4 comments · 1 min read · LW link
- AI Alternative Futures: Scenario Mapping Artificial Intelligence Risk—Request for Participation (*Closed*) · Kakili · Apr 27, 2022, 22:07 UTC · 10 points · 2 comments · 8 min read · LW link
- The Speed + Simplicity Prior is probably anti-deceptive · Yonadav Shavit · Apr 27, 2022, 19:30 UTC · 30 points · 28 comments · 12 min read · LW link
- If you’re very optimistic about ELK then you should be optimistic about outer alignment · Sam Marks · Apr 27, 2022, 19:30 UTC · 17 points · 8 comments · 3 min read · LW link
- The Game of Masks · Slimepriestess · Apr 27, 2022, 18:03 UTC · 50 points · 18 comments · 11 min read · LW link (hivewired.wordpress.com)
- Law-Following AI 3: Lawless AI Agents Undermine Stabilizing Agreements · Cullen · Apr 27, 2022, 17:30 UTC · 2 points · 2 comments · 3 min read · LW link
- Law-Following AI 2: Intent Alignment + Superintelligence → Lawless AI (By Default) · Cullen · Apr 27, 2022, 17:27 UTC · 5 points · 2 comments · 6 min read · LW link
- Law-Following AI 1: Sequence Introduction and Structure · Cullen · Apr 27, 2022, 17:26 UTC · 18 points · 10 comments · 9 min read · LW link
- [Intro to brain-like-AGI safety] 13. Symbol grounding & human social instincts · Steven Byrnes · Apr 27, 2022, 13:30 UTC · 73 points · 15 comments · 15 min read · LW link
- The case for turning glowfic into Sequences · Thomas Kwa · Apr 27, 2022, 6:58 UTC · 87 points · 29 comments · 5 min read · LW link
- [Link] Evidence of Fabricated Data in a Vitamin C trial by Paul E Marik et al in CHEST · Kenny · Apr 27, 2022, 6:48 UTC · 6 points · 1 comment · 1 min read · LW link
- SERI ML Alignment Theory Scholars Program 2022 · Ryan Kidd, Victor Warlop and ozhang · Apr 27, 2022, 0:43 UTC · 67 points · 6 comments · 3 min read · LW link
- EU Maximizing in a Gloomy World · David Udell · Apr 27, 2022, 0:28 UTC · 6 points · 2 comments · 1 min read · LW link
- Why Copilot Accelerates Timelines · Michaël Trazzi · Apr 26, 2022, 22:06 UTC · 35 points · 14 comments · 7 min read · LW link