AGI safety from first principles: Conclusion

Richard_Ngo · 4 Oct 2020 23:06 UTC
71 points
7 comments · 3 min read · LW link

[Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings

Kenny · 4 Oct 2020 20:26 UTC
1 point
14 comments · 1 min read · LW link

[Question] What reacts would you like to be able to give on posts? (emoticons, cognicons, and more)

Mati_Roy · 4 Oct 2020 18:31 UTC
18 points
27 comments · 1 min read · LW link

[Question] One hub, or many?

abramdemski · 4 Oct 2020 16:58 UTC
26 points
12 comments · 1 min read · LW link

AI race considerations in a report by the U.S. House Committee on Armed Services

NunoSempere · 4 Oct 2020 12:11 UTC
42 points
4 comments · 13 min read · LW link

Public transmit metta

Kaj_Sotala · 4 Oct 2020 11:40 UTC
29 points
6 comments · 2 min read · LW link
(kajsotala.fi)

[Question] Is there any work on incorporating aleatoric uncertainty and/or inherent randomness into AIXI?

David Scott Krueger (formerly: capybaralet) · 4 Oct 2020 8:10 UTC
9 points
7 comments · 1 min read · LW link

Fishbowl reality

Elo · 4 Oct 2020 2:16 UTC
11 points
0 comments · 2 min read · LW link

Postmortem to Petrov Day, 2020

Ben Pace · 3 Oct 2020 21:30 UTC
97 points
63 comments · 9 min read · LW link

Weird Things About Money

abramdemski · 3 Oct 2020 17:13 UTC
79 points
31 comments · 6 min read · LW link

On Option Paralysis—The Thing You Actually Do

Neel Nanda · 3 Oct 2020 11:50 UTC
11 points
5 comments · 6 min read · LW link
(www.neelnanda.io)

AGI safety from first principles: Control

Richard_Ngo · 2 Oct 2020 21:51 UTC
60 points
6 comments · 9 min read · LW link

Muting on Group Calls

jefftk · 2 Oct 2020 20:30 UTC
12 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems

Kaj_Sotala · 2 Oct 2020 11:50 UTC
46 points
1 comment · 3 min read · LW link
(kajsotala.fi)

Math That Clicks: Look for Two-Way Correspondences

TurnTrout · 2 Oct 2020 1:22 UTC
36 points
4 comments · 3 min read · LW link

A simple device for indoor air management

Richard Korzekwa · 2 Oct 2020 1:02 UTC
49 points
10 comments · 3 min read · LW link

Sunday Meetup: Workshop on Online Surveys with Spencer Greenberg

Raemon · 2 Oct 2020 0:34 UTC
27 points
5 comments · 1 min read · LW link

Linkpost: Choice Explains Positivity and Confirmation Bias

Gunnar_Zarncke · 1 Oct 2020 21:46 UTC
8 points
0 comments · 1 min read · LW link

Open & Welcome Thread – October 2020

Ben Pace · 1 Oct 2020 19:06 UTC
14 points
54 comments · 1 min read · LW link

Hiring engineers and researchers to help align GPT-3

paulfchristiano · 1 Oct 2020 18:54 UTC
206 points
13 comments · 3 min read · LW link

Covid 10/1: The Long Haul

Zvi · 1 Oct 2020 18:00 UTC
95 points
22 comments · 9 min read · LW link
(thezvi.wordpress.com)

Words and Implications

johnswentworth · 1 Oct 2020 17:37 UTC
62 points
25 comments · 8 min read · LW link

Your Standards are Too High

Neel Nanda · 1 Oct 2020 17:03 UTC
23 points
2 comments · 14 min read · LW link
(neelnanda.io)

Three car seats?

jefftk · 1 Oct 2020 14:30 UTC
18 points
9 comments · 1 min read · LW link
(www.jefftk.com)

Forecasting Newsletter: September 2020.

NunoSempere · 1 Oct 2020 11:00 UTC
21 points
3 comments · 11 min read · LW link

[Question] Babble challenge: 50 ways of sending something to the moon

jacobjacob · 1 Oct 2020 4:20 UTC
94 points
114 comments · 2 min read · LW link · 1 review

AGI safety from first principles: Alignment

Richard_Ngo · 1 Oct 2020 3:13 UTC
60 points
3 comments · 13 min read · LW link

How to not be an alarmist

DirectedEvolution · 30 Sep 2020 21:35 UTC
8 points
2 comments · 2 min read · LW link

[Question] Competence vs Alignment

Ariel Kwiatkowski · 30 Sep 2020 21:03 UTC
7 points
4 comments · 1 min read · LW link

“Zero Sum” is a misnomer.

abramdemski · 30 Sep 2020 18:25 UTC
120 points
34 comments · 6 min read · LW link

Evaluating Life Extension Advocacy Foundation

emanuele ascani · 30 Sep 2020 18:04 UTC
7 points
7 comments · 5 min read · LW link

[AN #119]: AI safety when agents are shaped by environments, not rewards

Rohin Shah · 30 Sep 2020 17:10 UTC
11 points
0 comments · 11 min read · LW link
(mailchi.mp)

Learning how to learn

Neel Nanda · 30 Sep 2020 16:50 UTC
38 points
0 comments · 15 min read · LW link
(www.neelnanda.io)

Industrial literacy

jasoncrawford · 30 Sep 2020 16:39 UTC
306 points
130 comments · 3 min read · LW link
(rootsofprogress.org)

Jason Crawford on the non-linear model of innovation: SSC Online Meetup

JoshuaFox · 30 Sep 2020 10:13 UTC
7 points
1 comment · 1 min read · LW link

Holy Grails of Chemistry

chemslug · 30 Sep 2020 2:03 UTC
34 points
2 comments · 1 min read · LW link

“Unsupervised” translation as an (intent) alignment problem

paulfchristiano · 30 Sep 2020 0:50 UTC
61 points
15 comments · 4 min read · LW link
(ai-alignment.com)

[Question] Examples of self-governance to reduce technology risk?

Jia · 29 Sep 2020 19:31 UTC
10 points
4 comments · 1 min read · LW link

AGI safety from first principles: Goals and Agency

Richard_Ngo · 29 Sep 2020 19:06 UTC
76 points
15 comments · 15 min read · LW link

Seek Upside Risk

Neel Nanda · 29 Sep 2020 16:47 UTC
20 points
6 comments · 9 min read · LW link
(www.neelnanda.io)

Doing discourse better: Stuff I wish I knew

dynomight · 29 Sep 2020 14:34 UTC
27 points
11 comments · 1 min read · LW link
(dyno-might.github.io)

David Friedman on Legal Systems Very Different from Ours: SlateStarCodex Online Meetup

JoshuaFox · 29 Sep 2020 11:18 UTC
10 points
1 comment · 1 min read · LW link

Reading Discussion Group

NoSignalNoNoise · 29 Sep 2020 3:59 UTC
6 points
0 comments · 1 min read · LW link

Cambridge Virtual LW/SSC Meetup

NoSignalNoNoise · 29 Sep 2020 3:42 UTC
6 points
0 comments · 1 min read · LW link

AGI safety from first principles: Superintelligence

Richard_Ngo · 28 Sep 2020 19:53 UTC
87 points
8 comments · 9 min read · LW link

AGI safety from first principles: Introduction

Richard_Ngo · 28 Sep 2020 19:53 UTC
128 points
18 comments · 2 min read · LW link · 1 review

[Question] is scope insensitivity really a brain error?

Kaarlo Tuomi · 28 Sep 2020 18:37 UTC
4 points
15 comments · 1 min read · LW link

[Question] What Decision Theory is Implied By Predictive Processing?

johnswentworth · 28 Sep 2020 17:20 UTC
56 points
17 comments · 1 min read · LW link

[Question] What are examples of Rationalist fable-like stories?

Mati_Roy · 28 Sep 2020 16:52 UTC
19 points
42 comments · 1 min read · LW link

Macro-Procrastination

Neel Nanda · 28 Sep 2020 16:07 UTC
9 points
0 comments · 9 min read · LW link
(www.neelnanda.io)