Streaming Science on Twitch

A Ray · 15 Nov 2021 22:24 UTC
21 points
1 comment · 3 min read · LW link

Ngo and Yudkowsky on alignment difficulty

15 Nov 2021 20:31 UTC
253 points
151 comments · 99 min read · LW link · 1 review

Dan Luu on Persistent Bad Decision Making (but maybe it’s noble?)

Elizabeth · 15 Nov 2021 20:05 UTC
17 points
3 comments · 1 min read · LW link
(danluu.com)

The poetry of progress

jasoncrawford · 15 Nov 2021 19:24 UTC
51 points
6 comments · 4 min read · LW link
(rootsofprogress.org)

[Question] Worst Commonsense Concepts?

abramdemski · 15 Nov 2021 18:22 UTC
73 points
34 comments · 3 min read · LW link

My understanding of the alignment problem

danieldewey · 15 Nov 2021 18:13 UTC
43 points
3 comments · 3 min read · LW link

“Summarizing Books with Human Feedback” (recursive GPT-3)

gwern · 15 Nov 2021 17:41 UTC
24 points
4 comments · 1 min read · LW link
(openai.com)

How Humanity Lost Control and Humans Lost Liberty: From Our Brave New World to Analogia (Sequence Introduction)

Justin Bullock · 15 Nov 2021 14:22 UTC
8 points
4 comments · 3 min read · LW link

Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

lsusr · 15 Nov 2021 10:02 UTC
20 points
8 comments · 15 min read · LW link

What the future will look like

avantika.mehra · 15 Nov 2021 5:14 UTC
7 points
1 comment · 3 min read · LW link

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Zvi · 15 Nov 2021 3:50 UTC
197 points
49 comments · 16 min read · LW link
(thezvi.wordpress.com)

An Emergency Fund for Effective Altruists (second version)

bice · 14 Nov 2021 18:28 UTC
12 points
4 comments · 2 min read · LW link

Televised sports exist to gamble with testosterone levels using prediction skill

Lucent · 14 Nov 2021 18:24 UTC
22 points
3 comments · 1 min read · LW link

Improving on the Karma System

Raelifin · 14 Nov 2021 18:01 UTC
105 points
36 comments · 19 min read · LW link

[Linkpost] Paul Graham 101

Gunnar_Zarncke · 14 Nov 2021 16:52 UTC
12 points
4 comments · 1 min read · LW link

My current uncertainties regarding AI, alignment, and the end of the world

dominicq · 14 Nov 2021 14:08 UTC
2 points
3 comments · 2 min read · LW link

Education on My Homeworld

lsusr · 14 Nov 2021 10:16 UTC
37 points
19 comments · 5 min read · LW link

What would we do if alignment were futile?

Grant Demaree · 14 Nov 2021 8:09 UTC
75 points
39 comments · 3 min read · LW link

A pharmaceutical stock pricing mystery

DirectedEvolution · 14 Nov 2021 1:19 UTC
14 points
2 comments · 3 min read · LW link

You are probably underestimating how good self-love can be

Charlie Rogers-Smith · 14 Nov 2021 0:41 UTC
167 points
19 comments · 12 min read · LW link · 1 review

Coordination Skills I Wish I Had For the Pandemic

Raemon · 13 Nov 2021 23:32 UTC
89 points
9 comments · 6 min read · LW link · 1 review

Sci-Hub sued in India

Connor_Flexman · 13 Nov 2021 23:12 UTC
131 points
19 comments · 7 min read · LW link

[Question] What’s the likelihood of only sub exponential growth for AGI?

M. Y. Zuo · 13 Nov 2021 22:46 UTC
5 points
22 comments · 1 min read · LW link

Comments on Carlsmith’s “Is power-seeking AI an existential risk?”

So8res · 13 Nov 2021 4:29 UTC
138 points
15 comments · 40 min read · LW link · 1 review

A FLI postdoctoral grant application: AI alignment via causal analysis and design of agents

PabloAMC · 13 Nov 2021 1:44 UTC
4 points
0 comments · 7 min read · LW link

[Question] Is Functional Decision Theory still an active area of research?

Grant Demaree · 13 Nov 2021 0:30 UTC
8 points
3 comments · 1 min read · LW link

Average probabilities, not log odds

AlexMennen · 12 Nov 2021 21:39 UTC
27 points
20 comments · 5 min read · LW link

[linkpost] Crypto Cities

mike_hawke · 12 Nov 2021 21:26 UTC
25 points
10 comments · 1 min read · LW link
(vitalik.ca)

A Defense of Functional Decision Theory

Heighn · 12 Nov 2021 20:59 UTC
21 points
221 comments · 10 min read · LW link

Why I’m excited about Redwood Research’s current project

paulfchristiano · 12 Nov 2021 19:26 UTC
114 points
6 comments · 7 min read · LW link

Stop button: towards a causal solution

tailcalled · 12 Nov 2021 19:09 UTC
25 points
37 comments · 9 min read · LW link

RandomWalkNFT: A Game Theory Exercise

Annapurna · 12 Nov 2021 19:05 UTC
7 points
10 comments · 2 min read · LW link

Preprint is out! 100,000 lumens to treat seasonal affective disorder

Fabienne · 12 Nov 2021 17:59 UTC
169 points
10 comments · 1 min read · LW link

ALERT⚠️ Not enough gud vibes 😎

Pee Doom · 12 Nov 2021 11:25 UTC
10 points
3 comments · 1 min read · LW link

Avoiding Negative Externalities—a theory with specific examples—Part 1

M. Y. Zuo · 12 Nov 2021 4:09 UTC
2 points
4 comments · 6 min read · LW link

It’s Ok to Dance Again

jefftk · 12 Nov 2021 2:50 UTC
8 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Measuring and Forecasting Risks from AI

jsteinhardt · 12 Nov 2021 2:30 UTC
24 points
0 comments · 3 min read · LW link
(bounded-regret.ghost.io)

AGI is at least as far away as Nuclear Fusion.

Logan Zoellner · 11 Nov 2021 21:33 UTC
0 points
8 comments · 1 min read · LW link

A Brief Introduction to Container Logistics

Vitor · 11 Nov 2021 15:58 UTC
267 points
22 comments · 11 min read · LW link · 1 review

Effective Altruism Virtual Programs Dec-Jan 2022

Yi-Yang · 11 Nov 2021 15:50 UTC
3 points
0 comments · 1 min read · LW link

Covid 11/11: Winter and Effective Treatments Are Coming

Zvi · 11 Nov 2021 14:50 UTC
65 points
19 comments · 12 min read · LW link
(thezvi.wordpress.com)

Using blinders to help you see things for what they are

Adam Zerner · 11 Nov 2021 7:07 UTC
13 points
2 comments · 2 min read · LW link

Hardcode the AGI to need our approval indefinitely?

MichaelStJules · 11 Nov 2021 7:04 UTC
2 points
2 comments · 1 min read · LW link

Discussion with Eliezer Yudkowsky on AGI interventions

11 Nov 2021 3:01 UTC
328 points
253 comments · 34 min read · LW link · 1 review

Relaxation-Based Search, From Everyday Life To Unfamiliar Territory

johnswentworth · 10 Nov 2021 21:47 UTC
58 points
3 comments · 8 min read · LW link

[Question] Self-education best practices

Sean McAneny · 10 Nov 2021 17:12 UTC
12 points
5 comments · 1 min read · LW link

[Question] What exactly is GPT-3’s base objective?

Daniel Kokotajlo · 10 Nov 2021 0:57 UTC
60 points
14 comments · 2 min read · LW link

Robin Hanson’s Grabby Aliens model explained—part 2

Writer · 9 Nov 2021 17:43 UTC
13 points
4 comments · 13 min read · LW link
(youtu.be)

Come for the productivity, stay for the philosophy

lionhearted (Sebastian Marshall) · 9 Nov 2021 13:10 UTC
23 points
6 comments · 1 min read · LW link

Erase button

Astor · 9 Nov 2021 9:39 UTC
3 points
6 comments · 1 min read · LW link