AGI is at least as far away as Nuclear Fusion.

Logan Zoellner · 11 Nov 2021 21:33 UTC
0 points
8 comments · 1 min read · LW link

A Brief Introduction to Container Logistics

Vitor · 11 Nov 2021 15:58 UTC
267 points
22 comments · 11 min read · LW link · 1 review

Effective Altruism Virtual Programs Dec-Jan 2022

Yi-Yang · 11 Nov 2021 15:50 UTC
3 points
0 comments · 1 min read · LW link

Covid 11/11: Winter and Effective Treatments Are Coming

Zvi · 11 Nov 2021 14:50 UTC
65 points
19 comments · 12 min read · LW link
(thezvi.wordpress.com)

Using blinders to help you see things for what they are

Adam Zerner · 11 Nov 2021 7:07 UTC
13 points
2 comments · 2 min read · LW link

Hardcode the AGI to need our approval indefinitely?

MichaelStJules · 11 Nov 2021 7:04 UTC
2 points
2 comments · 1 min read · LW link

Discussion with Eliezer Yudkowsky on AGI interventions

11 Nov 2021 3:01 UTC
328 points
253 comments · 34 min read · LW link · 1 review

Relaxation-Based Search, From Everyday Life To Unfamiliar Territory

johnswentworth · 10 Nov 2021 21:47 UTC
58 points
3 comments · 8 min read · LW link

[Question] Self-education best practices

Sean McAneny · 10 Nov 2021 17:12 UTC
12 points
5 comments · 1 min read · LW link

[Question] What exactly is GPT-3’s base objective?

Daniel Kokotajlo · 10 Nov 2021 0:57 UTC
60 points
14 comments · 2 min read · LW link

Robin Hanson’s Grabby Aliens model explained—part 2

Writer · 9 Nov 2021 17:43 UTC
13 points
4 comments · 13 min read · LW link
(youtu.be)

Come for the productivity, stay for the philosophy

lionhearted (Sebastian Marshall) · 9 Nov 2021 13:10 UTC
23 points
6 comments · 1 min read · LW link

Erase button

Astor · 9 Nov 2021 9:39 UTC
3 points
6 comments · 1 min read · LW link

Arguments in parallel vs arguments in series

Sunny from QAD · 9 Nov 2021 8:31 UTC
11 points
8 comments · 2 min read · LW link

Where did the 5 micron number come from? Nowhere good. [Wired.com]

Elizabeth · 9 Nov 2021 7:14 UTC
108 points
8 comments · 1 min read · LW link · 1 review
(www.wired.com)

In Defence of Optimizing Routine Tasks

leogao · 9 Nov 2021 5:09 UTC
47 points
6 comments · 3 min read · LW link · 1 review

[Question] Is there a clearly laid-out write-up of the case to drop Covid precautions?

NoSignalNoNoise · 9 Nov 2021 2:46 UTC
46 points
3 comments · 1 min read · LW link

Possible research directions to improve the mechanistic explanation of neural networks

delton137 · 9 Nov 2021 2:36 UTC
31 points
8 comments · 9 min read · LW link

ACX/SSC/LW Meetup

Sean Aubin · 9 Nov 2021 2:27 UTC
11 points
0 comments · 1 min read · LW link

How do we become confident in the safety of a machine learning system?

evhub · 8 Nov 2021 22:49 UTC
133 points
5 comments · 31 min read · LW link

Steelman solitaire: how to take playing devil’s advocate to the next level

KatWoods · 8 Nov 2021 20:49 UTC
63 points
2 comments · 5 min read · LW link

Excerpts from Veyne’s “Did the Greeks Believe in Their Myths?”

Rob Bensinger · 8 Nov 2021 20:23 UTC
24 points
1 comment · 16 min read · LW link

Worth checking your stock trading skills

at_the_zoo · 8 Nov 2021 19:19 UTC
48 points
37 comments · 3 min read · LW link

What are red flags for Neural Network suffering?

Marius Hobbhahn · 8 Nov 2021 12:51 UTC
29 points
15 comments · 12 min read · LW link

Transcript for Geoff Anders and Anna Salamon’s Oct. 23 conversation

Rob Bensinger · 8 Nov 2021 2:19 UTC
83 points
97 comments · 58 min read · LW link

[Question] How much of the supply-chain issues are due to monetary policy?

ChristianKl · 7 Nov 2021 21:21 UTC
11 points
3 comments · 1 min read · LW link

There Meat Come A Scandal...

Nicholas / Heather Kross · 7 Nov 2021 20:52 UTC
31 points
7 comments · 3 min read · LW link
(www.thinkingmuchbetter.com)

D&D.Sci Dungeoncrawling: The Crown of Command

aphyer · 7 Nov 2021 18:39 UTC
36 points
27 comments · 4 min read · LW link

ACX Atlanta November Meetup—November 13th

Steve French · 7 Nov 2021 18:37 UTC
1 point
0 comments · 1 min read · LW link

You Don’t Need Anthropics To Do Science

dadadarren · 7 Nov 2021 15:07 UTC
6 points
4 comments · 2 min read · LW link
(www.sleepingbeautyproblem.com)

Highlighting New Comments

jefftk · 7 Nov 2021 12:50 UTC
14 points
1 comment · 1 min read · LW link
(www.jefftk.com)

Using Brain-Computer Interfaces to get more data for AI alignment

Robbo · 7 Nov 2021 0:00 UTC
43 points
10 comments · 7 min read · LW link

South Bay LW Pilot Meetup (Sunnyvale)

IS · 6 Nov 2021 20:20 UTC
19 points
0 comments · 1 min read · LW link

App and book recommendations for people who want to be happier and more productive

KatWoods · 6 Nov 2021 17:40 UTC
141 points
43 comments · 8 min read · LW link

Chu are you?

Adele Lopez · 6 Nov 2021 17:39 UTC
60 points
9 comments · 9 min read · LW link
(adelelopez.com)

Substack Ho?

Zvi · 6 Nov 2021 16:50 UTC
27 points
17 comments · 4 min read · LW link
(thezvi.wordpress.com)

CFAR, responsibility and bureaucracy

ChristianKl · 6 Nov 2021 14:53 UTC
22 points
1 comment · 8 min read · LW link

Speaking of Stag Hunts

Duncan Sabien (Deactivated) · 6 Nov 2021 8:20 UTC
191 points
373 comments · 18 min read · LW link

Concentration of Force

Duncan Sabien (Deactivated) · 6 Nov 2021 8:20 UTC
240 points
23 comments · 12 min read · LW link · 1 review

Study Guide

johnswentworth · 6 Nov 2021 1:23 UTC
291 points
48 comments · 16 min read · LW link

Nightclubs in Heaven?

J Bostock · 5 Nov 2021 23:28 UTC
10 points
3 comments · 2 min read · LW link

Comments on OpenPhil’s Interpretability RFP

paulfchristiano · 5 Nov 2021 22:36 UTC
91 points
5 comments · 7 min read · LW link

How should we compare neural network representations?

jsteinhardt · 5 Nov 2021 22:10 UTC
24 points
0 comments · 3 min read · LW link
(bounded-regret.ghost.io)

Drug addicts and deceptively aligned agents—a comparative analysis

Jan · 5 Nov 2021 21:42 UTC
42 points
2 comments · 12 min read · LW link
(universalprior.substack.com)

Modeling the impact of safety agendas

Ben Cottier · 5 Nov 2021 19:46 UTC
51 points
6 comments · 10 min read · LW link

[Question] Summary of the sequences / Lesson plans for rationality

Space L Clottey · 5 Nov 2021 17:22 UTC
5 points
4 comments · 1 min read · LW link

[External Event] 2022 IEEE International Conference on Assured Autonomy (ICAA) - submission deadline extended

Aryeh Englander · 5 Nov 2021 15:29 UTC
13 points
0 comments · 3 min read · LW link

Disagreeables and Assessors: Two Intellectual Archetypes

ozziegooen · 5 Nov 2021 9:05 UTC
46 points
10 comments · 3 min read · LW link
(forum.effectivealtruism.org)

Y2K: Successful Practice for AI Alignment

Darmani · 5 Nov 2021 6:09 UTC
49 points
5 comments · 6 min read · LW link

[Question] How does one learn to create models?

Conor · 5 Nov 2021 2:57 UTC
3 points
1 comment · 1 min read · LW link