[Question] What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?

David Scott Krueger (formerly: capybaralet) 20 Aug 2019 21:45 UTC
29 points
27 comments, 1 min read, LW link

[Question] Has Moore’s Law actually slowed down?

Matthew Barnett 20 Aug 2019 19:18 UTC
14 points
7 comments, 1 min read, LW link

Cerebras Systems unveils a record 1.2 trillion transistor chip for AI

avturchin 20 Aug 2019 14:36 UTC
7 points
4 comments, 1 min read, LW link
(venturebeat.com)

Lisbon SSC Meetup #1

tamkin&popkin 20 Aug 2019 12:20 UTC
1 point
0 comments, 1 min read, LW link

Self-supervised learning & manipulative predictions

Steven Byrnes 20 Aug 2019 10:55 UTC
18 points
14 comments, 9 min read, LW link

Negative “eeny meeny miny moe”

jefftk 20 Aug 2019 2:48 UTC
25 points
6 comments, 1 min read, LW link

Why I Am Not a Technocrat

Spugpow 20 Aug 2019 2:06 UTC
−3 points
4 comments, 1 min read, LW link
(radicalxchange.org)

A misconception about immigration

limerott 19 Aug 2019 22:37 UTC
1 point
9 comments, 4 min read, LW link
(limerott.com)

[Question] Do We Change Our Minds Less Often Than We Think?

Raemon 19 Aug 2019 21:37 UTC
20 points
5 comments, 1 min read, LW link

Classifying specification problems as variants of Goodhart’s Law

Vika 19 Aug 2019 20:40 UTC
72 points
5 comments, 5 min read, LW link, 1 review

Unstriving

Jacob Falkovich 19 Aug 2019 14:31 UTC
38 points
7 comments, 6 min read, LW link

Goodhart’s Curse and Limitations on AI Alignment

Gordon Seidoh Worley 19 Aug 2019 7:57 UTC
25 points
18 comments, 10 min read, LW link

Raph Koster on Virtual Worlds vs Games (notes)

Raemon 18 Aug 2019 19:01 UTC
26 points
8 comments, 2 min read, LW link

“Can We Survive Technology” by von Neumann

Ben Pace 18 Aug 2019 18:58 UTC
32 points
2 comments, 1 min read, LW link
(geosci.uchicago.edu)

Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours

mako yass 18 Aug 2019 4:22 UTC
0 points
5 comments, 2 min read, LW link

Neural Nets in Python 1

lifelonglearner 18 Aug 2019 2:48 UTC
10 points
3 comments, 8 min read, LW link

Inspection Paradox as a Driver of Group Separation

Shmi 17 Aug 2019 21:47 UTC
29 points
0 comments, 1 min read, LW link

South Bay Meetup

David Friedman 17 Aug 2019 19:56 UTC
1 point
0 comments, 1 min read, LW link

Problems in AI Alignment that philosophers could potentially contribute to

Wei Dai 17 Aug 2019 17:38 UTC
79 points
14 comments, 2 min read, LW link

[Question] How can you use music to boost learning?

Matthew Barnett 17 Aug 2019 6:59 UTC
11 points
1 comment, 1 min read, LW link

A Primer on Matrix Calculus, Part 3: The Chain Rule

Matthew Barnett 17 Aug 2019 1:50 UTC
12 points
4 comments, 6 min read, LW link

Nashville SSC September Meetup

friedelcraftiness 16 Aug 2019 15:16 UTC
1 point
0 comments, 1 min read, LW link

Beliefs Are For True Things

Davis_Kingsley 15 Aug 2019 23:23 UTC
8 points
5 comments, 3 min read, LW link

[Question] What experiments would demonstrate “upper limits of augmented working memory?”

Raemon 15 Aug 2019 22:09 UTC
33 points
6 comments, 2 min read, LW link

Clarifying some key hypotheses in AI alignment

15 Aug 2019 21:29 UTC
79 points
12 comments, 9 min read, LW link

Tessercube — OpenPGP Made Mobile

Suji Yan 15 Aug 2019 9:34 UTC
4 points
0 comments, 1 min read, LW link

A Primer on Matrix Calculus, Part 2: Jacobians and other fun

Matthew Barnett 15 Aug 2019 1:13 UTC
22 points
7 comments, 7 min read, LW link

Partial summary of debate with Benquo and Jessicata [pt 1]

Raemon 14 Aug 2019 20:02 UTC
89 points
63 comments, 22 min read, LW link, 3 reviews

“Designing agent incentives to avoid reward tampering”, DeepMind

gwern 14 Aug 2019 16:57 UTC
28 points
15 comments, 1 min read, LW link
(medium.com)

Subagents, trauma and rationality

Kaj_Sotala 14 Aug 2019 13:14 UTC
111 points
4 comments, 19 min read, LW link

Predicted AI alignment event/meeting calendar

rmoehn 14 Aug 2019 7:14 UTC
29 points
14 comments, 1 min read, LW link

Natural laws should be explicit constraints on strategy space

ryan_b 13 Aug 2019 20:22 UTC
8 points
6 comments, 1 min read, LW link

Distance Functions are Hard

Grue_Slinky 13 Aug 2019 17:33 UTC
31 points
19 comments, 6 min read, LW link

Book Review: Secular Cycles

Scott Alexander 13 Aug 2019 4:10 UTC
62 points
10 comments, 16 min read, LW link, 1 review
(slatestarcodex.com)

A Primer on Matrix Calculus, Part 1: Basic review

Matthew Barnett 12 Aug 2019 23:44 UTC
25 points
4 comments, 7 min read, LW link

[Question] What explanatory power does Kahneman’s System 2 possess?

Richard_Ngo 12 Aug 2019 15:23 UTC
31 points
2 comments, 1 min read, LW link

Mesa-Optimizers and Over-optimization Failure (Optimizing and Goodhart Effects, Clarifying Thoughts—Part 4)

Davidmanheim 12 Aug 2019 8:07 UTC
15 points
3 comments, 4 min read, LW link

Adjectives from the Future: The Dangers of Result-based Descriptions

Pradeep_Kumar 11 Aug 2019 19:19 UTC
19 points
8 comments, 11 min read, LW link

[Question] Could we solve this email mess if we all moved to paid emails?

jacobjacob 11 Aug 2019 16:31 UTC
29 points
50 comments, 4 min read, LW link

AI Safety Reading Group

Søren Elverlin 11 Aug 2019 9:01 UTC
16 points
8 comments, 1 min read, LW link

[Question] Does human choice have to be transitive in order to be rational/consistent?

jmh 11 Aug 2019 1:49 UTC
9 points
6 comments, 1 min read, LW link

Diana Fleischman and Geoffrey Miller—Audience Q&A

Jacob Falkovich 10 Aug 2019 22:37 UTC
38 points
6 comments, 9 min read, LW link

Intransitive Preferences You Can’t Pump

zulupineapple 9 Aug 2019 23:10 UTC
0 points
2 comments, 1 min read, LW link

Categorial preferences and utility functions

DavidHolmes 9 Aug 2019 21:36 UTC
10 points
6 comments, 5 min read, LW link

[Question] What is the state of the ego depletion field?

Eli Tyre 9 Aug 2019 20:30 UTC
27 points
10 comments, 1 min read, LW link

Why Gradients Vanish and Explode

Matthew Barnett 9 Aug 2019 2:54 UTC
25 points
9 comments, 3 min read, LW link

AI Forecasting Dictionary (Forecasting infrastructure, part 1)

8 Aug 2019 16:10 UTC
50 points
0 comments, 5 min read, LW link

[Question] Why do humans not have built-in neural i/o channels?

Richard_Ngo 8 Aug 2019 13:09 UTC
25 points
23 comments, 1 min read, LW link

Which of these five AI alignment research project ideas are no good?

rmoehn 8 Aug 2019 7:17 UTC
25 points
13 comments, 1 min read, LW link

Calibrating With Cards

lifelonglearner 8 Aug 2019 6:44 UTC
32 points
3 comments, 3 min read, LW link