Replace yourself before you stop organizing your community.

Raemon · 22 Jul 2018 20:57 UTC
65 points
16 comments · 4 min read · LW link

Who Wants The Job?

Zvi · 22 Jul 2018 14:00 UTC
24 points
29 comments · 2 min read · LW link
(thezvi.wordpress.com)

Simplicio and Sophisticus

Zvi · 22 Jul 2018 13:30 UTC
42 points
1 comment · 4 min read · LW link
(thezvi.wordpress.com)

Exorcizing the Speed Prior?

abramdemski · 22 Jul 2018 6:45 UTC
14 points
6 comments · 3 min read · LW link

12 Virtues of Rationality posters/icons

habryka · 22 Jul 2018 5:19 UTC
59 points
8 comments · 1 min read · LW link

Bayesian Reasoning with Unsong Theodicy means we shouldn’t destroy the universe

pku · 22 Jul 2018 1:25 UTC
7 points
1 comment · 1 min read · LW link

Stable Pointers to Value III: Recursive Quantilization

abramdemski · 21 Jul 2018 8:06 UTC
20 points
4 comments · 4 min read · LW link

Conceptual problems with utility functions, second attempt at explaining

Dacyn · 21 Jul 2018 2:08 UTC
16 points
5 comments · 2 min read · LW link

Can few-shot learning teach AI right from wrong?

Charlie Steiner · 20 Jul 2018 7:45 UTC
13 points
3 comments · 6 min read · LW link

The Psychology Of Resolute Agents

Chris_Leong · 20 Jul 2018 5:42 UTC
10 points
20 comments · 5 min read · LW link

Probability is Real, and Value is Complex

abramdemski · 20 Jul 2018 5:24 UTC
80 points
21 comments · 6 min read · LW link

Solving the AI Race Finalists

Gordon Seidoh Worley · 19 Jul 2018 21:04 UTC
24 points
0 comments · 1 min read · LW link
(medium.com)

“Artificial Intelligence” (new entry at Stanford Encyclopedia of Philosophy)

fortyeridania · 19 Jul 2018 9:48 UTC
5 points
8 comments · 1 min read · LW link
(plato.stanford.edu)

Discussion: Raising the Sanity Waterline

Chriswaterguy · 19 Jul 2018 2:12 UTC
2 points
0 comments · 1 min read · LW link

LW Update 2018-07-18 – AlignmentForum Bug Fixes

Raemon · 19 Jul 2018 2:10 UTC
13 points
0 comments · 1 min read · LW link

Generalized Kelly betting

Linda Linsefors · 19 Jul 2018 1:38 UTC
15 points
5 comments · 2 min read · LW link

Mechanism Design for AI

Tobias_Baumann · 18 Jul 2018 16:47 UTC
5 points
3 comments · 1 min read · LW link
(s-risks.org)

A Step-by-step Guide to Finding a (Good!) Therapist

squidious · 18 Jul 2018 1:50 UTC
46 points
5 comments · 9 min read · LW link
(opalsandbonobos.blogspot.com)

Simple Metaphor About Compressed Sensing

ryan_b · 17 Jul 2018 15:47 UTC
6 points
0 comments · 1 min read · LW link

Figuring out what Alice wants, part II

Stuart_Armstrong · 17 Jul 2018 13:59 UTC
17 points
0 comments · 5 min read · LW link

Figuring out what Alice wants, part I

Stuart_Armstrong · 17 Jul 2018 13:59 UTC
15 points
8 comments · 3 min read · LW link

How To Use Bureaucracies

Samo Burja · 17 Jul 2018 8:10 UTC
63 points
37 comments · 9 min read · LW link
(medium.com)

September CFAR Workshop

CFAR Team · 17 Jul 2018 3:16 UTC
20 points
0 comments · 1 min read · LW link

(AI alignment) Now is special

Andrew Quinn · 17 Jul 2018 1:50 UTC
2 points
0 comments · 1 min read · LW link

Look Under the Light Post

Gordon Seidoh Worley · 16 Jul 2018 22:19 UTC
22 points
8 comments · 4 min read · LW link

Alignment Newsletter #15: 07/16/18

Rohin Shah · 16 Jul 2018 16:10 UTC
42 points
0 comments · 15 min read · LW link
(mailchi.mp)

Compact vs. Wide Models

Vaniver · 16 Jul 2018 4:09 UTC
31 points
5 comments · 3 min read · LW link

Probabilistic decision-making as an anxiety-reduction technique

RationallyDense · 16 Jul 2018 3:51 UTC
8 points
4 comments · 1 min read · LW link

Buridan’s ass in coordination games

jessicata · 16 Jul 2018 2:51 UTC
52 points
26 comments · 10 min read · LW link

Research Debt

Elizabeth · 15 Jul 2018 19:36 UTC
24 points
2 comments · 1 min read · LW link
(distill.pub)

An optimistic explanation of the outrage epidemic

chaosmage · 15 Jul 2018 14:35 UTC
18 points
5 comments · 3 min read · LW link

Announcement: AI alignment prize round 3 winners and next round

cousin_it · 15 Jul 2018 7:40 UTC
93 points
7 comments · 1 min read · LW link

Meetup Cookbook

maia · 14 Jul 2018 22:26 UTC
74 points
7 comments · 1 min read · LW link
(tigrennatenn.neocities.org)

Expected Pain Parameters

Alicorn · 14 Jul 2018 19:30 UTC
87 points
12 comments · 2 min read · LW link

Boltzmann Brains and Within-model vs. Between-models Probability

Charlie Steiner · 14 Jul 2018 9:52 UTC
15 points
12 comments · 3 min read · LW link

[1607.08289] “Mammalian Value Systems” (as a starting point for human value system model created by IRL agent)

avturchin · 14 Jul 2018 9:46 UTC
9 points
9 comments · 1 min read · LW link
(arxiv.org)

Generating vs Recognizing

lifelonglearner · 14 Jul 2018 5:10 UTC
15 points
3 comments · 4 min read · LW link

LW Update 2018-7-14 – Styling Rework, CommentsItem, Performance

Raemon · 14 Jul 2018 1:13 UTC
30 points
0 comments · 1 min read · LW link

Secondary Stressors and Tactile Ambition

lionhearted (Sebastian Marshall) · 13 Jul 2018 0:26 UTC
16 points
16 comments · 4 min read · LW link

A Sarno-Hanson Synthesis

moridinamael · 12 Jul 2018 16:13 UTC
52 points
15 comments · 4 min read · LW link

Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem.

Shmi · 12 Jul 2018 6:52 UTC
26 points
34 comments · 2 min read · LW link

What does the stock market tell us about AI timelines?

Tobias_Baumann · 12 Jul 2018 6:05 UTC
6 points
5 comments · 1 min read · LW link
(s-risks.org)

An Agent is a Worldline in Tegmark V

komponisto · 12 Jul 2018 5:12 UTC
24 points
12 comments · 2 min read · LW link

Washington, D.C.: What If

RobinZ · 12 Jul 2018 4:30 UTC
9 points
0 comments · 1 min read · LW link

Are pre-specified utility functions about the real world possible in principle?

mlogan · 11 Jul 2018 18:46 UTC
24 points
7 comments · 4 min read · LW link

Melatonin: Much More Than You Wanted To Know

Scott Alexander · 11 Jul 2018 17:40 UTC
120 points
16 comments · 15 min read · LW link
(slatestarcodex.com)

Monk Treehouse: some problems defining simulation

dranorter · 11 Jul 2018 7:35 UTC
6 points
1 comment · 5 min read · LW link

Mathematical Mindset

komponisto · 11 Jul 2018 3:03 UTC
54 points
5 comments · 2 min read · LW link

Decision-theoretic problems and Theories; An (Incomplete) comparative list

somervta · 11 Jul 2018 2:59 UTC
36 points
0 comments · 1 min read · LW link
(docs.google.com)

Agents That Learn From Human Behavior Can’t Learn Human Values That Humans Haven’t Learned Yet

steven0461 · 11 Jul 2018 2:59 UTC
28 points
11 comments · 1 min read · LW link