Luck based medicine: my resentful story of becoming a medical miracle

Elizabeth · Oct 16, 2022, 5:40 PM
488 points
121 comments · 12 min read · LW link · 3 reviews
(acesounderglass.com)

Counterarguments to the basic AI x-risk case

KatjaGrace · Oct 14, 2022, 1:00 PM
371 points
124 comments · 34 min read · LW link · 1 review
(aiimpacts.org)

So, geez there’s a lot of AI content these days

Raemon · Oct 6, 2022, 9:32 PM
258 points
140 comments · 6 min read · LW link

Introduction to abstract entropy

Alex_Altair · Oct 20, 2022, 9:03 PM
237 points
78 comments · 18 min read · LW link · 1 review

Lessons learned from talking to >100 academics about AI safety

Marius Hobbhahn · Oct 10, 2022, 1:16 PM
216 points
18 comments · 12 min read · LW link · 1 review

What does it take to defend the world against out-of-control AGIs?

Steven Byrnes · Oct 25, 2022, 2:47 PM
208 points
49 comments · 30 min read · LW link · 1 review

Decision theory does not imply that we get to have nice things

So8res · Oct 18, 2022, 3:04 AM
171 points
73 comments · 26 min read · LW link · 2 reviews

Six (and a half) intuitions for KL divergence

CallumMcDougall · Oct 12, 2022, 9:07 PM
168 points
27 comments · 10 min read · LW link · 1 review
(www.perfectlynormal.co.uk)

The Social Recession: By the Numbers

antonomon · Oct 29, 2022, 6:45 PM
165 points
29 comments · 8 min read · LW link
(novum.substack.com)

Why I think there’s a one-in-six chance of an imminent global nuclear war

Max Tegmark · Oct 8, 2022, 6:26 AM
164 points
169 comments · 4 min read · LW link

Age changes what you care about

Dentin · Oct 16, 2022, 3:36 PM
141 points
37 comments · 2 min read · LW link

AI Timelines via Cumulative Optimization Power: Less Long, More Short

jacob_cannell · Oct 6, 2022, 12:21 AM
138 points
33 comments · 6 min read · LW link

Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley

Oct 27, 2022, 1:32 AM
135 points
14 comments · 12 min read · LW link

Don’t leave your fingerprints on the future

So8res · Oct 8, 2022, 12:35 AM
131 points
48 comments · 5 min read · LW link

Niceness is unnatural

So8res · Oct 13, 2022, 1:30 AM
130 points
20 comments · 8 min read · LW link · 1 review

Warning Shots Probably Wouldn’t Change The Picture Much

So8res · Oct 6, 2022, 5:15 AM
126 points
42 comments · 2 min read · LW link

Mnestics

Jarred Filmer · Oct 23, 2022, 12:30 AM
120 points
6 comments · 4 min read · LW link

Am I secretly excited for AI getting weird?

porby · Oct 29, 2022, 10:16 PM
116 points
4 comments · 4 min read · LW link

Why Weren’t Hot Air Balloons Invented Sooner?

Lost Futures · Oct 18, 2022, 12:41 AM
115 points
52 comments · 6 min read · LW link
(lostfutures.substack.com)

Actually, All Nuclear Famine Papers are Bunk

Lao Mein · Oct 12, 2022, 5:58 AM
113 points
37 comments · 2 min read · LW link · 1 review

That one apocalyptic nuclear famine paper is bunk

Lao Mein · Oct 12, 2022, 3:33 AM
110 points
10 comments · 1 min read · LW link

Plans Are Predictions, Not Optimization Targets

johnswentworth · Oct 20, 2022, 9:17 PM
108 points
20 comments · 4 min read · LW link · 1 review

Consider your appetite for disagreements

Adam Zerner · Oct 8, 2022, 11:25 PM
107 points
18 comments · 6 min read · LW link · 1 review

Contra shard theory, in the context of the diamond maximizer problem

So8res · Oct 13, 2022, 11:51 PM
105 points
19 comments · 2 min read · LW link · 1 review

Scaling Laws for Reward Model Overoptimization

Oct 20, 2022, 12:20 AM
103 points
13 comments · 1 min read · LW link
(arxiv.org)

Analysis: US restricts GPU sales to China

aog · Oct 7, 2022, 6:38 PM
102 points
58 comments · 5 min read · LW link

Alignment 201 curriculum

Richard_Ngo · Oct 12, 2022, 6:03 PM
102 points
3 comments · 1 min read · LW link
(www.agisafetyfundamentals.com)

The Teacup Test

lsusr · Oct 8, 2022, 4:25 AM
102 points
32 comments · 2 min read · LW link

Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small

Oct 28, 2022, 11:55 PM
101 points
9 comments · 9 min read · LW link · 2 reviews
(arxiv.org)

How To Make Prediction Markets Useful For Alignment Work

johnswentworth · Oct 18, 2022, 7:01 PM
97 points
18 comments · 2 min read · LW link

A shot at the diamond-alignment problem

TurnTrout · Oct 6, 2022, 6:29 PM
95 points
67 comments · 15 min read · LW link

Transformative VR Is Likely Coming Soon

jimrandomh · Oct 13, 2022, 6:25 AM
92 points
46 comments · 2 min read · LW link

«Boundaries», Part 3a: Defining boundaries as directed Markov blankets

Andrew_Critch · Oct 30, 2022, 6:31 AM
90 points
20 comments · 15 min read · LW link

A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox

Henrik Karlsson · Oct 5, 2022, 7:07 PM
89 points
12 comments · 11 min read · LW link
(escapingflatland.substack.com)

Why Balsa Research is Worthwhile

Zvi · Oct 10, 2022, 1:50 PM
87 points
12 comments · 8 min read · LW link
(thezvi.wordpress.com)

Polysemanticity and Capacity in Neural Networks

Oct 7, 2022, 5:51 PM
87 points
14 comments · 3 min read · LW link

I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too

chaosmage · Oct 15, 2022, 12:41 PM
84 points
9 comments · 3 min read · LW link · 1 review

Paper: Discovering novel algorithms with AlphaTensor [Deepmind]

LawrenceC · Oct 5, 2022, 4:20 PM
82 points
18 comments · 1 min read · LW link
(www.deepmind.com)

The heritability of human values: A behavior genetic critique of Shard Theory

geoffreymiller · Oct 20, 2022, 3:51 PM
82 points
63 comments · 21 min read · LW link

Untapped Potential at 13-18

belkarx · Oct 18, 2022, 6:09 PM
82 points
53 comments · 1 min read · LW link

More Recent Progress in the Theory of Neural Networks

jylin04 · Oct 6, 2022, 4:57 PM
82 points
6 comments · 4 min read · LW link

“Normal” is the equilibrium state of past optimization processes

Alex_Altair · Oct 30, 2022, 7:03 PM
82 points
5 comments · 5 min read · LW link

Voting Theory Introduction

Scott Garrabrant · Oct 17, 2022, 8:48 AM
80 points
8 comments · 6 min read · LW link

The “you-can-just” alarm

Emrik · Oct 8, 2022, 10:43 AM
77 points
3 comments · 1 min read · LW link

Maximal Lotteries

Scott Garrabrant · Oct 17, 2022, 8:54 AM
77 points
11 comments · 7 min read · LW link

Response to Katja Grace’s AI x-risk counterarguments

Oct 19, 2022, 1:17 AM
77 points
18 comments · 15 min read · LW link

Neural Tangent Kernel Distillation

Oct 5, 2022, 6:11 PM
76 points
20 comments · 8 min read · LW link

Open Problem in Voting Theory

Scott Garrabrant · Oct 17, 2022, 8:42 PM
75 points
16 comments · 6 min read · LW link

What does it mean for an AGI to be ‘safe’?

So8res · Oct 7, 2022, 4:13 AM
74 points
29 comments · 3 min read · LW link

Wisdom Cannot Be Unzipped

Sable · Oct 22, 2022, 12:28 AM
74 points
17 comments · 7 min read · LW link · 1 review
(affablyevil.substack.com)