How To Get Into Independent Research On Alignment/Agency

johnswentworth · 19 Nov 2021 0:00 UTC
354 points
38 comments · 13 min read · LW link · 2 reviews

Discussion with Eliezer Yudkowsky on AGI interventions

11 Nov 2021 3:01 UTC
328 points
251 comments · 34 min read · LW link · 1 review

Frame Control

Aella · 27 Nov 2021 22:59 UTC
326 points
283 comments · 23 min read · LW link · 2 reviews

Feature Selection

Zack_M_Davis · 1 Nov 2021 0:22 UTC
320 points
24 comments · 16 min read · LW link · 1 review

EfficientZero: How It Works

1a3orn · 26 Nov 2021 15:17 UTC
297 points
50 comments · 29 min read · LW link · 1 review

Study Guide

johnswentworth · 6 Nov 2021 1:23 UTC
288 points
48 comments · 16 min read · LW link

A Brief Introduction to Container Logistics

Vitor · 11 Nov 2021 15:58 UTC
262 points
22 comments · 11 min read · LW link · 1 review

larger language models may disappoint you [or, an eternally unfinished draft]

nostalgebraist · 26 Nov 2021 23:08 UTC
260 points
31 comments · 31 min read · LW link · 2 reviews

Omicron Variant Post #1: We’re F***ed, It’s Never Over

Zvi · 26 Nov 2021 19:00 UTC
260 points
95 comments · 18 min read · LW link
(thezvi.wordpress.com)

Ngo and Yudkowsky on alignment difficulty

15 Nov 2021 20:31 UTC
253 points
151 comments · 99 min read · LW link · 1 review

Concentration of Force

Duncan Sabien (Deactivated) · 6 Nov 2021 8:20 UTC
240 points
23 comments · 12 min read · LW link · 1 review

Yudkowsky and Christiano discuss “Takeoff Speeds”

Eliezer Yudkowsky · 22 Nov 2021 19:35 UTC
205 points
176 comments · 60 min read · LW link · 1 review

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Zvi · 15 Nov 2021 3:50 UTC
197 points
49 comments · 16 min read · LW link
(thezvi.wordpress.com)

Almost everyone should be less afraid of lawsuits

alyssavance · 27 Nov 2021 2:06 UTC
197 points
18 comments · 5 min read · LW link · 2 reviews

Speaking of Stag Hunts

Duncan Sabien (Deactivated) · 6 Nov 2021 8:20 UTC
191 points
373 comments · 18 min read · LW link

The Rationalists of the 1950s (and before) also called themselves “Rationalists”

Owain_Evans · 28 Nov 2021 20:17 UTC
188 points
32 comments · 3 min read · LW link · 1 review

Split and Commit

Duncan Sabien (Deactivated) · 21 Nov 2021 6:27 UTC
184 points
34 comments · 7 min read · LW link · 1 review

Preprint is out! 100,000 lumens to treat seasonal affective disorder

Fabienne · 12 Nov 2021 17:59 UTC
169 points
10 comments · 1 min read · LW link

You are probably underestimating how good self-love can be

Charlie Rogers-Smith · 14 Nov 2021 0:41 UTC
167 points
19 comments · 12 min read · LW link · 1 review

The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century

jasoncrawford · 28 Nov 2021 17:22 UTC
151 points
36 comments · 15 min read · LW link · 1 review
(rootsofprogress.org)

App and book recommendations for people who want to be happier and more productive

KatWoods · 6 Nov 2021 17:40 UTC
141 points
43 comments · 8 min read · LW link

Comments on Carlsmith’s “Is power-seeking AI an existential risk?”

So8res · 13 Nov 2021 4:29 UTC
138 points
15 comments · 40 min read · LW link · 1 review

EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised

gwern · 2 Nov 2021 2:32 UTC
137 points
52 comments · 1 min read · LW link
(arxiv.org)

How do we become confident in the safety of a machine learning system?

evhub · 8 Nov 2021 22:49 UTC
133 points
5 comments · 31 min read · LW link

Sci-Hub sued in India

Connor_Flexman · 13 Nov 2021 23:12 UTC
131 points
19 comments · 7 min read · LW link

Ngo and Yudkowsky on AI capability gains

18 Nov 2021 22:19 UTC
130 points
61 comments · 39 min read · LW link · 1 review

Transcript: “You Should Read HPMOR”

TurnTrout · 2 Nov 2021 18:20 UTC
124 points
12 comments · 5 min read · LW link · 1 review

Soares, Tallinn, and Yudkowsky discuss AGI cognition

29 Nov 2021 19:26 UTC
121 points
39 comments · 40 min read · LW link · 1 review

Omicron Variant Post #2

Zvi · 29 Nov 2021 16:30 UTC
120 points
34 comments · 14 min read · LW link
(thezvi.wordpress.com)

Christiano, Cotra, and Yudkowsky on AI progress

25 Nov 2021 16:45 UTC
119 points
95 comments · 68 min read · LW link

Why I’m excited about Redwood Research’s current project

paulfchristiano · 12 Nov 2021 19:26 UTC
114 points
6 comments · 7 min read · LW link

Where did the 5 micron number come from? Nowhere good. [Wired.com]

Elizabeth · 9 Nov 2021 7:14 UTC
108 points
8 comments · 1 min read · LW link · 1 review
(www.wired.com)

Effective Evil

lsusr · 2 Nov 2021 0:26 UTC
104 points
7 comments · 3 min read · LW link

Money Stuff

Jacob Falkovich · 1 Nov 2021 16:08 UTC
103 points
18 comments · 7 min read · LW link

Rapid Increase of Highly Mutated B.1.1.529 Strain in South Africa

dawangy · 26 Nov 2021 1:05 UTC
103 points
15 comments · 1 min read · LW link

The Maker of MIND

Tomás B. · 20 Nov 2021 16:28 UTC
102 points
19 comments · 11 min read · LW link

Improving on the Karma System

Raelifin · 14 Nov 2021 18:01 UTC
98 points
35 comments · 19 min read · LW link

Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22]

3 Nov 2021 18:22 UTC
95 points
4 comments · 1 min read · LW link

[Book Review] “The Bell Curve” by Charles Murray

lsusr · 2 Nov 2021 5:49 UTC
94 points
134 comments · 23 min read · LW link

[Book Review] “Sorceror’s Apprentice” by Tahir Shah

lsusr · 20 Nov 2021 11:29 UTC
91 points
11 comments · 7 min read · LW link

Comments on OpenPhil’s Interpretability RFP

paulfchristiano · 5 Nov 2021 22:36 UTC
91 points
5 comments · 7 min read · LW link

AI Safety Needs Great Engineers

Andy Jones · 23 Nov 2021 15:40 UTC
89 points
43 comments · 4 min read · LW link

Coordination Skills I Wish I Had For the Pandemic

Raemon · 13 Nov 2021 23:32 UTC
89 points
9 comments · 6 min read · LW link · 1 review

A Bayesian Aggregation Paradox

Jsevillamol · 22 Nov 2021 10:39 UTC
87 points
23 comments · 7 min read · LW link

Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability

TurnTrout · 18 Nov 2021 1:54 UTC
85 points
8 comments · 17 min read · LW link
(www.overleaf.com)

Transcript for Geoff Anders and Anna Salamon’s Oct. 23 conversation

Rob Bensinger · 8 Nov 2021 2:19 UTC
83 points
97 comments · 58 min read · LW link

A positive case for how we might succeed at prosaic AI alignment

evhub · 16 Nov 2021 1:49 UTC
81 points
46 comments · 6 min read · LW link

What would we do if alignment were futile?

Grant Demaree · 14 Nov 2021 8:09 UTC
75 points
39 comments · 3 min read · LW link

Covid 11/25: Another Thanksgiving

Zvi · 25 Nov 2021 13:40 UTC
73 points
9 comments · 21 min read · LW link
(thezvi.wordpress.com)

[Question] Worst Commonsense Concepts?

abramdemski · 15 Nov 2021 18:22 UTC
73 points
34 comments · 3 min read · LW link