Suggestion: New material shouldn’t be released too fast

Chris_Leong · 21 Nov 2018 16:39 UTC
23 points
7 comments · 1 min read · LW link

EA Bristol Strategy Meeting

thegreatnick · 21 Nov 2018 10:57 UTC
1 point
0 comments · 1 min read · LW link

Rationality Café No. 6 - The Sequences, Part 1; Section B Repeat

thegreatnick · 21 Nov 2018 10:54 UTC
8 points
2 comments · 1 min read · LW link

EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

habryka · 21 Nov 2018 3:39 UTC
37 points
0 comments · 1 min read · LW link

Incorrect hypotheses point to correct observations

Kaj_Sotala · 20 Nov 2018 21:10 UTC
167 points
40 comments · 4 min read · LW link
(kajsotala.fi)

Preschool: Much Less Than You Wanted To Know

Zvi · 20 Nov 2018 19:30 UTC
65 points
15 comments · 2 min read · LW link
(thezvi.wordpress.com)

New safety research agenda: scalable agent alignment via reward modeling

Vika · 20 Nov 2018 17:29 UTC
34 points
12 comments · 1 min read · LW link
(medium.com)

Prosaic AI alignment

paulfchristiano · 20 Nov 2018 13:56 UTC
47 points
10 comments · 8 min read · LW link

Moscow LW meetup in “Nauchka” library

Alexander230 · 20 Nov 2018 12:19 UTC
2 points
0 comments · 1 min read · LW link

[Insert clever intro here]

Bae's Theorem · 20 Nov 2018 3:26 UTC
18 points
13 comments · 1 min read · LW link

Alignment Newsletter #33

Rohin Shah · 19 Nov 2018 17:20 UTC
23 points
0 comments · 9 min read · LW link
(mailchi.mp)

Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery

Alexander230 · 19 Nov 2018 14:23 UTC
2 points
0 comments · 1 min read · LW link

Letting Others Be Vulnerable

lifelonglearner · 19 Nov 2018 2:59 UTC
33 points
6 comments · 7 min read · LW link

Clickbait might not be destroying our general Intelligence

Donald Hobson · 19 Nov 2018 0:13 UTC
25 points
13 comments · 2 min read · LW link

South Bay Meetup 12/8

DavidFriedman · 19 Nov 2018 0:04 UTC
3 points
0 comments · 1 min read · LW link

[Link] “They go together: Freedom, Prosperity, and Big Government”

CronoDAS · 18 Nov 2018 16:51 UTC
11 points
3 comments · 1 min read · LW link

Collaboration-by-Design versus Emergent Collaboration

Davidmanheim · 18 Nov 2018 7:22 UTC
11 points
2 comments · 2 min read · LW link

Diagonalization Fixed Point Exercises

18 Nov 2018 0:31 UTC
40 points
25 comments · 3 min read · LW link

Ia! Ia! Extradimensional Cephalopod Nafl’fhtagn!

ExCeph · 17 Nov 2018 23:00 UTC
14 points
5 comments · 1 min read · LW link

Effective Altruism, YouTube, and AI (talk by Lê Nguyên Hoang)

Paperclip Minimizer · 17 Nov 2018 19:21 UTC
3 points
0 comments · 1 min read · LW link
(www.youtube.com)

An unaligned benchmark

paulfchristiano · 17 Nov 2018 15:51 UTC
31 points
0 comments · 9 min read · LW link

On Rigorous Error Handling

Martin Sustrik · 17 Nov 2018 9:20 UTC
13 points
4 comments · 6 min read · LW link
(250bpm.com)

Act of Charity

jessicata · 17 Nov 2018 5:19 UTC
186 points
49 comments · 8 min read · LW link · 1 review

Topological Fixed Point Exercises

17 Nov 2018 1:40 UTC
71 points
51 comments · 3 min read · LW link

Fixed Point Exercises

Scott Garrabrant · 17 Nov 2018 1:39 UTC
64 points
8 comments · 2 min read · LW link

Is Clickbait Destroying Our General Intelligence?

Eliezer Yudkowsky · 16 Nov 2018 23:06 UTC
191 points
65 comments · 5 min read · LW link · 2 reviews

Dimensional regret without resets

Vanessa Kosoy · 16 Nov 2018 19:22 UTC
11 points
0 comments · 12 min read · LW link

History of LessWrong: Some Data Graphics

Said Achmiz · 16 Nov 2018 7:07 UTC
65 points
18 comments · 1 min read · LW link

Sam Harris and the Is–Ought Gap

Tyrrell_McAllister · 16 Nov 2018 1:04 UTC
90 points
46 comments · 6 min read · LW link

Embedded Agency (full-text version)

15 Nov 2018 19:49 UTC
201 points
17 comments · 54 min read · LW link

Switching hosting providers today, there probably will be some hiccups

habryka · 15 Nov 2018 19:45 UTC
12 points
0 comments · 1 min read · LW link

Clarifying “AI Alignment”

paulfchristiano · 15 Nov 2018 14:41 UTC
66 points
84 comments · 3 min read · LW link · 2 reviews

The Inspection Paradox is Everywhere

Chris_Leong · 15 Nov 2018 10:55 UTC
24 points
3 comments · 1 min read · LW link
(allendowney.blogspot.com)

SSC Atlanta Meetup Number 3

Steve French · 15 Nov 2018 5:19 UTC
2 points
0 comments · 1 min read · LW link

Explore/Exploit for Conversations

Hazard · 15 Nov 2018 4:11 UTC
38 points
2 comments · 5 min read · LW link

Washington, DC LW Meetup

rusalkii · 15 Nov 2018 2:13 UTC
1 point
0 comments · 1 min read · LW link

Stoicism: Cautionary Advice

VivaLaPanda · 14 Nov 2018 23:18 UTC
40 points
17 comments · 3 min read · LW link

Mandatory Obsessions

Jacob Falkovich · 14 Nov 2018 18:19 UTC
77 points
14 comments · 6 min read · LW link

Pub meetup: Developmental Milestones

Giles · 14 Nov 2018 4:18 UTC
6 points
0 comments · 1 min read · LW link

Deck Guide: Burning Drakes

Zvi · 13 Nov 2018 19:40 UTC
8 points
0 comments · 12 min read · LW link
(thezvi.wordpress.com)

Acknowledging Human Preference Types to Support Value Learning

Nandi · 13 Nov 2018 18:57 UTC
34 points
4 comments · 9 min read · LW link

The Steering Problem

paulfchristiano · 13 Nov 2018 17:14 UTC
43 points
12 comments · 7 min read · LW link

Post-Rationality and Rationality, A Dialogue

agilecaveman · 13 Nov 2018 5:55 UTC
2 points
2 comments · 10 min read · LW link

Laughing Away the Little Miseries

Rossin · 13 Nov 2018 3:31 UTC
11 points
7 comments · 2 min read · LW link

Kelly bettors

DanielFilan · 13 Nov 2018 0:40 UTC
24 points
3 comments · 10 min read · LW link
(danielfilan.com)

Wireheading as a Possible Contributor to Civilizational Decline

avturchin · 12 Nov 2018 20:33 UTC
3 points
6 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Alignment Newsletter #32

Rohin Shah · 12 Nov 2018 17:20 UTC
18 points
0 comments · 12 min read · LW link
(mailchi.mp)

AI development incentive gradients are not uniformly terrible

rk · 12 Nov 2018 16:27 UTC
21 points
12 comments · 6 min read · LW link

What is being?

Andrew Bindon · 12 Nov 2018 15:33 UTC
−14 points
20 comments · 7 min read · LW link

Aligned AI, The Scientist

Shmi · 12 Nov 2018 6:36 UTC
12 points
2 comments · 1 min read · LW link