Incorrect hypotheses point to correct observations

Kaj_Sotala · Nov 20, 2018, 9:10 PM
169 points
40 comments · 4 min read · LW link
(kajsotala.fi)

Preschool: Much Less Than You Wanted To Know

Zvi · Nov 20, 2018, 7:30 PM
65 points
15 comments · 2 min read · LW link
(thezvi.wordpress.com)

New safety research agenda: scalable agent alignment via reward modeling

Vika · Nov 20, 2018, 5:29 PM
34 points
12 comments · 1 min read · LW link
(medium.com)

Prosaic AI alignment

paulfchristiano · Nov 20, 2018, 1:56 PM
48 points
10 comments · 8 min read · LW link

Moscow LW meetup in “Nauchka” library

Alexander230 · Nov 20, 2018, 12:19 PM
2 points
0 comments · 1 min read · LW link

[Insert clever intro here]

Bae's Theorem · Nov 20, 2018, 3:26 AM
18 points
13 comments · 1 min read · LW link

Alignment Newsletter #33

Rohin Shah · Nov 19, 2018, 5:20 PM
23 points
0 comments · 9 min read · LW link
(mailchi.mp)

Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery

Alexander230 · Nov 19, 2018, 2:23 PM
2 points
0 comments · 1 min read · LW link

Letting Others Be Vulnerable

lifelonglearner · Nov 19, 2018, 2:59 AM
34 points
6 comments · 7 min read · LW link

Clickbait might not be destroying our general Intelligence

Donald Hobson · Nov 19, 2018, 12:13 AM
25 points
13 comments · 2 min read · LW link

South Bay Meetup 12/8

DavidFriedman · Nov 19, 2018, 12:04 AM
3 points
0 comments · 1 min read · LW link

[Link] “They go together: Freedom, Prosperity, and Big Government”

CronoDAS · Nov 18, 2018, 4:51 PM
11 points
3 comments · 1 min read · LW link

Collaboration-by-Design versus Emergent Collaboration

Davidmanheim · Nov 18, 2018, 7:22 AM
11 points
2 comments · 2 min read · LW link

Diagonalization Fixed Point Exercises

Nov 18, 2018, 12:31 AM
40 points
25 comments · 3 min read · LW link

Ia! Ia! Extradimensional Cephalopod Nafl’fhtagn!

ExCeph · Nov 17, 2018, 11:00 PM
14 points
5 comments · 1 min read · LW link

Effective Altruism, YouTube, and AI (talk by Lê Nguyên Hoang)

Paperclip Minimizer · Nov 17, 2018, 7:21 PM
3 points
0 comments · LW link
(www.youtube.com)

An unaligned benchmark

paulfchristiano · Nov 17, 2018, 3:51 PM
31 points
0 comments · 9 min read · LW link

On Rigorous Error Handling

Martin Sustrik · Nov 17, 2018, 9:20 AM
13 points
4 comments · 6 min read · LW link
(250bpm.com)

Act of Charity

jessicata · Nov 17, 2018, 5:19 AM
186 points
49 comments · 8 min read · LW link · 1 review

Topological Fixed Point Exercises

Nov 17, 2018, 1:40 AM
71 points
51 comments · 3 min read · LW link

Fixed Point Exercises

Scott Garrabrant · Nov 17, 2018, 1:39 AM
64 points
8 comments · 2 min read · LW link

Is Clickbait Destroying Our General Intelligence?

Eliezer Yudkowsky · Nov 16, 2018, 11:06 PM
208 points
67 comments · 5 min read · LW link · 2 reviews

Dimensional regret without resets

Vanessa Kosoy · Nov 16, 2018, 7:22 PM
11 points
0 comments · 12 min read · LW link

History of LessWrong: Some Data Graphics

Said Achmiz · Nov 16, 2018, 7:07 AM
65 points
18 comments · 1 min read · LW link

Sam Harris and the Is–Ought Gap

Tyrrell_McAllister · Nov 16, 2018, 1:04 AM
91 points
46 comments · 6 min read · LW link

Embedded Agency (full-text version)

Nov 15, 2018, 7:49 PM
208 points
17 comments · 54 min read · LW link

Switching hosting providers today, there probably will be some hiccups

habryka · Nov 15, 2018, 7:45 PM
12 points
0 comments · 1 min read · LW link

Clarifying “AI Alignment”

paulfchristiano · Nov 15, 2018, 2:41 PM
67 points
84 comments · 3 min read · LW link · 2 reviews

The Inspection Paradox is Everywhere

Chris_Leong · Nov 15, 2018, 10:55 AM
24 points
3 comments · 1 min read · LW link
(allendowney.blogspot.com)

SSC Atlanta Meetup Number 3

Steve French · Nov 15, 2018, 5:19 AM
2 points
0 comments · LW link

Explore/Exploit for Conversations

Hazard · Nov 15, 2018, 4:11 AM
39 points
2 comments · 5 min read · LW link

Washington, DC LW Meetup

rusalkii · Nov 15, 2018, 2:13 AM
1 point
0 comments · 1 min read · LW link

Stoicism: Cautionary Advice

VivaLaPanda · Nov 14, 2018, 11:18 PM
41 points
16 comments · 3 min read · LW link

Mandatory Obsessions

Jacob Falkovich · Nov 14, 2018, 6:19 PM
77 points
14 comments · 6 min read · LW link

Pub meetup: Developmental Milestones

Giles · Nov 14, 2018, 4:18 AM
6 points
0 comments · 1 min read · LW link

Deck Guide: Burning Drakes

Zvi · Nov 13, 2018, 7:40 PM
8 points
0 comments · 12 min read · LW link
(thezvi.wordpress.com)

Acknowledging Human Preference Types to Support Value Learning

Nandi · Nov 13, 2018, 6:57 PM
34 points
4 comments · 9 min read · LW link

The Steering Problem

paulfchristiano · Nov 13, 2018, 5:14 PM
44 points
12 comments · 7 min read · LW link

Post-Rationality and Rationality, A Dialogue

agilecaveman · Nov 13, 2018, 5:55 AM
2 points
2 comments · 10 min read · LW link

Laughing Away the Little Miseries

Rossin · Nov 13, 2018, 3:31 AM
12 points
7 comments · 2 min read · LW link

Kelly bettors

DanielFilan · Nov 13, 2018, 12:40 AM
24 points
3 comments · 10 min read · LW link
(danielfilan.com)

Wireheading as a Possible Contributor to Civilizational Decline

avturchin · Nov 12, 2018, 8:33 PM
3 points
6 comments · LW link
(forum.effectivealtruism.org)

Alignment Newsletter #32

Rohin Shah · Nov 12, 2018, 5:20 PM
18 points
0 comments · 12 min read · LW link
(mailchi.mp)

AI development incentive gradients are not uniformly terrible

rk · Nov 12, 2018, 4:27 PM
21 points
12 comments · 6 min read · LW link

What is being?

Andrew Bindon · Nov 12, 2018, 3:33 PM
−14 points
20 comments · 7 min read · LW link

Aligned AI, The Scientist

Shmi · Nov 12, 2018, 6:36 AM
12 points
2 comments · 1 min read · LW link

Combat vs Nurture: Cultural Genesis

Ruby · Nov 12, 2018, 2:11 AM
35 points
12 comments · 6 min read · LW link

Rationality Is Not Systematized Winning

namespace · Nov 11, 2018, 10:05 PM
36 points
20 comments · 1 min read · LW link
(www.thelastrationalist.com)

“She Wanted It”

sarahconstantin · Nov 11, 2018, 10:00 PM
120 points
21 comments · 7 min read · LW link
(srconstantin.wordpress.com)

Future directions for ambitious value learning

Rohin Shah · Nov 11, 2018, 3:53 PM
48 points
9 comments · 4 min read · LW link