LessWrong Archive, Page 2
A market is a neural network | David Hugh-Jones | Sep 15, 2022, 9:53 PM | 7 points | 4 comments | 8 min read
Understanding Conjecture: Notes from Connor Leahy interview | Orpheus16 | Sep 15, 2022, 6:37 PM | 107 points | 23 comments | 15 min read
How should DeepMind’s Chinchilla revise our AI forecasts? | Cleo Nardo | Sep 15, 2022, 5:54 PM | 35 points | 12 comments | 13 min read
Rational Animations’ Script Writing Contest | Writer | Sep 15, 2022, 4:56 PM | 23 points | 1 comment | 3 min read
Covid 9/15/22: Permanent Normal | Zvi | Sep 15, 2022, 4:00 PM | 32 points | 9 comments | 20 min read | (thezvi.wordpress.com)
[Question] Are Human Brains Universal? | DragonGod | Sep 15, 2022, 3:15 PM | 16 points | 28 comments | 5 min read
Intelligence failures and a theory of change for forecasting | NathanBarnard | Sep 15, 2022, 3:02 PM | 5 points | 0 comments | 10 min read
Why deceptive alignment matters for AGI safety | Marius Hobbhahn | Sep 15, 2022, 1:38 PM | 68 points | 13 comments | 13 min read
FDT defects in a realistic Twin Prisoners’ Dilemma | SMK | Sep 15, 2022, 8:55 AM | 38 points | 1 comment | 26 min read
[Question] What’s the longest a sentient observer could survive in the Dark Era? | Raemon | Sep 15, 2022, 8:43 AM | 33 points | 15 comments | 1 min read
The Value of Not Being an Imposter | sudo | Sep 15, 2022, 8:32 AM | 5 points | 0 comments | 1 min read
Capability and Agency as Cornerstones of AI risk — My current model | wilm | Sep 15, 2022, 8:25 AM | 10 points | 4 comments | 12 min read
General advice for transitioning into Theoretical AI Safety | Martín Soto | Sep 15, 2022, 5:23 AM | 12 points | 0 comments | 10 min read
Sequencing Intro II: Adapters | jefftk | Sep 15, 2022, 3:30 AM | 12 points | 0 comments | 2 min read | (www.jefftk.com)
[Question] How do I find tutors for obscure skills/subjects (i.e. fermi estimation tutors) | joraine | Sep 15, 2022, 1:15 AM | 11 points | 2 comments | 1 min read
[Question] Forecasting thread: How does AI risk level vary based on timelines? | elifland | Sep 14, 2022, 11:56 PM | 34 points | 7 comments | 1 min read
Coordinate-Free Interpretability Theory | johnswentworth | Sep 14, 2022, 11:33 PM | 52 points | 16 comments | 5 min read
Progress links and tweets, 2022-09-14 | jasoncrawford | Sep 14, 2022, 11:21 PM | 9 points | 2 comments | 1 min read | (rootsofprogress.org)
Effective altruism in the garden of ends | Tyler Alterman | Sep 14, 2022, 10:02 PM | 24 points | 1 comment | 27 min read
The problem with the media presentation of “believing in AI” | Roman Leventov | Sep 14, 2022, 9:05 PM | 3 points | 0 comments | 1 min read
Seeing the Schema | vitaliya | Sep 14, 2022, 8:45 PM | 23 points | 6 comments | 1 min read
Responding to ‘Beyond Hyperanthropomorphism’ | ukc10014 | Sep 14, 2022, 8:37 PM | 9 points | 0 comments | 16 min read
When is intent alignment sufficient or necessary to reduce AGI conflict? | JesseClifton, Sammy Martin and Anthony DiGiovanni | Sep 14, 2022, 7:39 PM | 40 points | 0 comments | 9 min read
When would AGIs engage in conflict? | JesseClifton, Sammy Martin and Anthony DiGiovanni | Sep 14, 2022, 7:38 PM | 52 points | 5 comments | 13 min read
When does technical work to reduce AGI conflict make a difference?: Introduction | JesseClifton, Sammy Martin and Anthony DiGiovanni | Sep 14, 2022, 7:38 PM | 52 points | 3 comments | 6 min read
ACT-1: Transformer for Actions | Daniel Kokotajlo | Sep 14, 2022, 7:09 PM | 52 points | 4 comments | 1 min read | (www.adept.ai)
Renormalization: Why Bigger is Simpler | tailcalled | Sep 14, 2022, 5:52 PM | 30 points | 5 comments | 1 min read | (www.youtube.com)
Guesstimate Algorithm for Medical Research | Elizabeth | Sep 14, 2022, 5:30 PM | 26 points | 0 comments | 7 min read | (acesounderglass.com)
Precise P(doom) isn’t very important for prioritization or strategy | harsimony | Sep 14, 2022, 5:19 PM | 14 points | 6 comments | 1 min read
Transhumanism, genetic engineering, and the biological basis of intelligence. | fowlertm | Sep 14, 2022, 3:55 PM | 41 points | 23 comments | 1 min read
What would happen if we abolished the FDA tomorrow? | Yair Halberstadt | Sep 14, 2022, 3:22 PM | 19 points | 15 comments | 4 min read
Emily Brontë on: Psychology Required for Serious™ AGI Safety Research | robertzk | Sep 14, 2022, 2:47 PM | 2 points | 0 comments | 1 min read
The Defender’s Advantage of Interpretability | Marius Hobbhahn | Sep 14, 2022, 2:05 PM | 41 points | 4 comments | 6 min read
[Question] Why Do People Think Humans Are Stupid? | DragonGod | Sep 14, 2022, 1:55 PM | 22 points | 41 comments | 3 min read
[Question] Are Speed Superintelligences Feasible for Modern ML Techniques? | DragonGod | Sep 14, 2022, 12:59 PM | 9 points | 7 comments | 1 min read
[Question] Would a Misaligned SSI Really Kill Us All? | DragonGod | Sep 14, 2022, 12:15 PM | 6 points | 7 comments | 6 min read
Some ideas for epistles to the AI ethicists | Charlie Steiner | Sep 14, 2022, 9:07 AM | 19 points | 0 comments | 4 min read
Git Re-Basin: Merging Models modulo Permutation Symmetries [Linkpost] | aog | Sep 14, 2022, 8:55 AM | 21 points | 0 comments | 2 min read | (arxiv.org)
Dan Luu on Futurist Predictions | RobertM | Sep 14, 2022, 3:01 AM | 50 points | 9 comments | 5 min read | (danluu.com)
Simple 5x5 Go | jefftk | Sep 14, 2022, 2:00 AM | 18 points | 3 comments | 1 min read | (www.jefftk.com)
I’m taking a course on game theory and am faced with this question. What’s the rational decision? | Dalton Mabery | Sep 14, 2022, 12:27 AM | 0 points | 12 comments | 1 min read
Twin Cities ACX Meetup—Oct 2022 | Timothy M. | Sep 13, 2022, 10:38 PM | 1 point | 2 comments | 1 min read
Trying to find the underlying structure of computational systems | Matthias G. Mayer | Sep 13, 2022, 9:16 PM | 18 points | 9 comments | 4 min read
Risk aversion and GPT-3 | casualphysicsenjoyer | Sep 13, 2022, 8:50 PM | 1 point | 0 comments | 1 min read
Simple proofs of the age of the universe (or other things) | Astynax | Sep 13, 2022, 6:20 PM | 16 points | 12 comments | 1 min read
New tool for exploring EA Forum, LessWrong and Alignment Forum—Tree of Tags | Filip Sondej | Sep 13, 2022, 5:33 PM | 31 points | 2 comments | 1 min read
An investigation into when agents may be incentivized to manipulate our beliefs. | Felix Hofstätter | Sep 13, 2022, 5:08 PM | 15 points | 0 comments | 14 min read
Deep Q-Networks Explained | Jay Bailey | Sep 13, 2022, 12:01 PM | 58 points | 8 comments | 20 min read
Ideas of the Gaps | Q Home | Sep 13, 2022, 10:55 AM | 4 points | 3 comments | 12 min read
[Question] Which LessWrong content would you like recorded into audio/podcast form? | Ruby | Sep 13, 2022, 1:20 AM | 29 points | 11 comments | 1 min read