Can few-shot learning teach AI right from wrong?

Charlie Steiner · Jul 20, 2018, 7:45 AM
13 points
3 comments · 6 min read · LW link

The Psychology Of Resolute Agents

Chris_Leong · Jul 20, 2018, 5:42 AM
10 points
20 comments · 5 min read · LW link

Probability is Real, and Value is Complex

abramdemski · Jul 20, 2018, 5:24 AM
80 points
21 comments · 6 min read · LW link

Solving the AI Race Finalists

Gordon Seidoh Worley · Jul 19, 2018, 9:04 PM
24 points
0 comments · 1 min read · LW link
(medium.com)

“Artificial Intelligence” (new entry at Stanford Encyclopedia of Philosophy)

fortyeridania · Jul 19, 2018, 9:48 AM
5 points
8 comments · LW link
(plato.stanford.edu)

Discussion: Raising the Sanity Waterline

Chriswaterguy · Jul 19, 2018, 2:12 AM
2 points
0 comments · 1 min read · LW link

LW Update 2018-07-18 – AlignmentForum Bug Fixes

Raemon · Jul 19, 2018, 2:10 AM
13 points
0 comments · 1 min read · LW link

Generalized Kelly betting

Linda Linsefors · Jul 19, 2018, 1:38 AM
15 points
5 comments · 2 min read · LW link

Mechanism Design for AI

Tobias_Baumann · Jul 18, 2018, 4:47 PM
5 points
3 comments · LW link
(s-risks.org)

A Step-by-step Guide to Finding a (Good!) Therapist

squidious · Jul 18, 2018, 1:50 AM
46 points
5 comments · 9 min read · LW link
(opalsandbonobos.blogspot.com)

Simple Metaphor About Compressed Sensing

ryan_b · Jul 17, 2018, 3:47 PM
6 points
0 comments · 1 min read · LW link

Figuring out what Alice wants, part II

Stuart_Armstrong · Jul 17, 2018, 1:59 PM
17 points
0 comments · 5 min read · LW link

Figuring out what Alice wants, part I

Stuart_Armstrong · Jul 17, 2018, 1:59 PM
15 points
8 comments · 3 min read · LW link

How To Use Bureaucracies

Samo Burja · Jul 17, 2018, 8:10 AM
64 points
37 comments · 9 min read · LW link
(medium.com)

September CFAR Workshop

CFAR Team · Jul 17, 2018, 3:16 AM
20 points
0 comments · 1 min read · LW link

(AI alignment) Now is special

Andrew Quinn · Jul 17, 2018, 1:50 AM
2 points
0 comments · 1 min read · LW link

Look Under the Light Post

Gordon Seidoh Worley · Jul 16, 2018, 10:19 PM
22 points
8 comments · 4 min read · LW link

Alignment Newsletter #15: 07/16/18

Rohin Shah · Jul 16, 2018, 4:10 PM
42 points
0 comments · 15 min read · LW link
(mailchi.mp)

Compact vs. Wide Models

Vaniver · Jul 16, 2018, 4:09 AM
31 points
5 comments · 3 min read · LW link

Probabilistic decision-making as an anxiety-reduction technique

RationallyDense · Jul 16, 2018, 3:51 AM
8 points
4 comments · 1 min read · LW link

Buridan’s ass in coordination games

jessicata · Jul 16, 2018, 2:51 AM
52 points
26 comments · 10 min read · LW link

Research Debt

Elizabeth · Jul 15, 2018, 7:36 PM
25 points
2 comments · LW link
(distill.pub)

An optimistic explanation of the outrage epidemic

chaosmage · Jul 15, 2018, 2:35 PM
18 points
5 comments · 3 min read · LW link

Announcement: AI alignment prize round 3 winners and next round

cousin_it · Jul 15, 2018, 7:40 AM
93 points
7 comments · 1 min read · LW link

Meetup Cookbook

maia · Jul 14, 2018, 10:26 PM
74 points
7 comments · 1 min read · LW link
(tigrennatenn.neocities.org)

Expected Pain Parameters

Alicorn · Jul 14, 2018, 7:30 PM
87 points
12 comments · 2 min read · LW link

Boltzmann Brains and Within-model vs. Between-models Probability

Charlie Steiner · Jul 14, 2018, 9:52 AM
15 points
12 comments · 3 min read · LW link

[1607.08289] “Mammalian Value Systems” (as a starting point for human value system model created by IRL agent)

avturchin · Jul 14, 2018, 9:46 AM
9 points
9 comments · LW link
(arxiv.org)

Generating vs Recognizing

lifelonglearner · Jul 14, 2018, 5:10 AM
15 points
3 comments · 4 min read · LW link

LW Update 2018-7-14 – Styling Rework, CommentsItem, Performance

Raemon · Jul 14, 2018, 1:13 AM
30 points
0 comments · 1 min read · LW link

Secondary Stressors and Tactile Ambition

lionhearted (Sebastian Marshall) · Jul 13, 2018, 12:26 AM
16 points
16 comments · 4 min read · LW link

A Sarno-Hanson Synthesis

moridinamael · Jul 12, 2018, 4:13 PM
52 points
15 comments · 4 min read · LW link

Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem.

Shmi · Jul 12, 2018, 6:52 AM
26 points
34 comments · 2 min read · LW link

What does the stock market tell us about AI timelines?

Tobias_Baumann · Jul 12, 2018, 6:05 AM
6 points
5 comments · LW link
(s-risks.org)

An Agent is a Worldline in Tegmark V

komponisto · Jul 12, 2018, 5:12 AM
24 points
12 comments · 2 min read · LW link

Washington, D.C.: What If

RobinZ · Jul 12, 2018, 4:30 AM
9 points
0 comments · 1 min read · LW link

Are pre-specified utility functions about the real world possible in principle?

mlogan · Jul 11, 2018, 6:46 PM
24 points
7 comments · 4 min read · LW link

Melatonin: Much More Than You Wanted To Know

Scott Alexander · Jul 11, 2018, 5:40 PM
122 points
16 comments · 15 min read · LW link
(slatestarcodex.com)

Monk Treehouse: some problems defining simulation

dranorter · Jul 11, 2018, 7:35 AM
6 points
1 comment · 5 min read · LW link

Mathematical Mindset

komponisto · Jul 11, 2018, 3:03 AM
54 points
5 comments · 2 min read · LW link

Decision-theoretic problems and Theories; An (Incomplete) comparative list

somervta · Jul 11, 2018, 2:59 AM
36 points
0 comments · 1 min read · LW link
(docs.google.com)

Agents That Learn From Human Behavior Can’t Learn Human Values That Humans Haven’t Learned Yet

steven0461 · Jul 11, 2018, 2:59 AM
28 points
11 comments · 1 min read · LW link

On the Role of Counterfactuals in Learning

Max Kanwal · Jul 11, 2018, 2:45 AM
11 points
2 comments · 3 min read · LW link

Clarifying Consequentialists in the Solomonoff Prior

Vlad Mikulik · Jul 11, 2018, 2:35 AM
20 points
16 comments · 6 min read · LW link

Complete Class: Consequentialist Foundations

abramdemski · Jul 11, 2018, 1:57 AM
58 points
37 comments · 13 min read · LW link

Conditions under which misaligned subagents can (not) arise in classifiers

anon1 · Jul 11, 2018, 1:52 AM
12 points
2 comments · 2 min read · LW link

No, I won’t go there, it feels like you’re trying to Pascal-mug me

Rupert · Jul 11, 2018, 1:37 AM
9 points
0 comments · 2 min read · LW link

Conceptual problems with utility functions

Dacyn · Jul 11, 2018, 1:29 AM
22 points
12 comments · 2 min read · LW link

Dependent Type Theory and Zero-Shot Reasoning

evhub · Jul 11, 2018, 1:16 AM
27 points
3 comments · 5 min read · LW link

A comment on the IDA-AlphaGoZero metaphor; capabilities versus alignment

AlexMennen · Jul 11, 2018, 1:03 AM
40 points
1 comment · 1 min read · LW link