Research Debt

Elizabeth · Jul 15, 2018, 7:36 PM
25 points
2 comments · LW link
(distill.pub)

An optimistic explanation of the outrage epidemic

chaosmage · Jul 15, 2018, 2:35 PM
18 points
5 comments · 3 min read · LW link

Announcement: AI alignment prize round 3 winners and next round

cousin_it · Jul 15, 2018, 7:40 AM
93 points
7 comments · 1 min read · LW link

Meetup Cookbook

maia · Jul 14, 2018, 10:26 PM
74 points
7 comments · 1 min read · LW link
(tigrennatenn.neocities.org)

Expected Pain Parameters

Alicorn · Jul 14, 2018, 7:30 PM
87 points
12 comments · 2 min read · LW link

Boltzmann Brains and Within-model vs. Between-models Probability

Charlie Steiner · Jul 14, 2018, 9:52 AM
15 points
12 comments · 3 min read · LW link

[1607.08289] “Mammalian Value Systems” (as a starting point for human value system model created by IRL agent)

avturchin · Jul 14, 2018, 9:46 AM
9 points
9 comments · LW link
(arxiv.org)

Generating vs Recognizing

lifelonglearner · Jul 14, 2018, 5:10 AM
15 points
3 comments · 4 min read · LW link

LW Update 2018-7-14 – Styling Rework, CommentsItem, Performance

Raemon · Jul 14, 2018, 1:13 AM
30 points
0 comments · 1 min read · LW link

Secondary Stressors and Tactile Ambition

lionhearted (Sebastian Marshall) · Jul 13, 2018, 12:26 AM
16 points
16 comments · 4 min read · LW link

A Sarno-Hanson Synthesis

moridinamael · Jul 12, 2018, 4:13 PM
52 points
15 comments · 4 min read · LW link

Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem.

Shmi · Jul 12, 2018, 6:52 AM
26 points
34 comments · 2 min read · LW link

What does the stock market tell us about AI timelines?

Tobias_Baumann · Jul 12, 2018, 6:05 AM
6 points
5 comments · LW link
(s-risks.org)

An Agent is a Worldline in Tegmark V

komponisto · Jul 12, 2018, 5:12 AM
24 points
12 comments · 2 min read · LW link

Washington, D.C.: What If

RobinZ · Jul 12, 2018, 4:30 AM
9 points
0 comments · 1 min read · LW link

Are pre-specified utility functions about the real world possible in principle?

mlogan · Jul 11, 2018, 6:46 PM
24 points
7 comments · 4 min read · LW link

Melatonin: Much More Than You Wanted To Know

Scott Alexander · Jul 11, 2018, 5:40 PM
122 points
16 comments · 15 min read · LW link
(slatestarcodex.com)

Monk Treehouse: some problems defining simulation

dranorter · Jul 11, 2018, 7:35 AM
6 points
1 comment · 5 min read · LW link

Mathematical Mindset

komponisto · Jul 11, 2018, 3:03 AM
54 points
5 comments · 2 min read · LW link

Decision-theoretic problems and Theories; An (Incomplete) comparative list

somervta · Jul 11, 2018, 2:59 AM
36 points
0 comments · 1 min read · LW link
(docs.google.com)

Agents That Learn From Human Behavior Can’t Learn Human Values That Humans Haven’t Learned Yet

steven0461 · Jul 11, 2018, 2:59 AM
28 points
11 comments · 1 min read · LW link

On the Role of Counterfactuals in Learning

Max Kanwal · Jul 11, 2018, 2:45 AM
11 points
2 comments · 3 min read · LW link

Clarifying Consequentialists in the Solomonoff Prior

Vlad Mikulik · Jul 11, 2018, 2:35 AM
20 points
16 comments · 6 min read · LW link

Complete Class: Consequentialist Foundations

abramdemski · Jul 11, 2018, 1:57 AM
58 points
37 comments · 13 min read · LW link

Conditions under which misaligned subagents can (not) arise in classifiers

anon1 · Jul 11, 2018, 1:52 AM
12 points
2 comments · 2 min read · LW link

No, I won’t go there, it feels like you’re trying to Pascal-mug me

Rupert · Jul 11, 2018, 1:37 AM
9 points
0 comments · 2 min read · LW link

Conceptual problems with utility functions

Dacyn · Jul 11, 2018, 1:29 AM
22 points
12 comments · 2 min read · LW link

Dependent Type Theory and Zero-Shot Reasoning

evhub · Jul 11, 2018, 1:16 AM
27 points
3 comments · 5 min read · LW link

A comment on the IDA-AlphaGoZero metaphor; capabilities versus alignment

AlexMennen · Jul 11, 2018, 1:03 AM
40 points
1 comment · 1 min read · LW link

Bounding Goodhart’s Law

eric_langlois · Jul 11, 2018, 12:46 AM
43 points
2 comments · 5 min read · LW link

Mechanistic Transparency for Machine Learning

DanielFilan · Jul 11, 2018, 12:34 AM
55 points
9 comments · 4 min read · LW link

An environment for studying counterfactuals

Nisan · Jul 11, 2018, 12:14 AM
15 points
6 comments · 3 min read · LW link

A universal score for optimizers

levin · Jul 10, 2018, 11:52 PM
15 points
8 comments · 3 min read · LW link

Bayesian Probability is for things that are Space-like Separated from You

Scott Garrabrant · Jul 10, 2018, 11:47 PM
86 points
22 comments · 2 min read · LW link

Alignment problems for economists

Chris van Merwijk · Jul 10, 2018, 11:43 PM
5 points
2 comments · 2 min read · LW link

Non-resolve as Resolve

Linda Linsefors · Jul 10, 2018, 11:31 PM
15 points
1 comment · 2 min read · LW link

A framework for thinking about wireheading

theotherotheralex · Jul 10, 2018, 11:14 PM
15 points
4 comments · 1 min read · LW link

Logical Uncertainty and Functional Decision Theory

swordsintoploughshares · Jul 10, 2018, 11:08 PM
15 points
4 comments · 2 min read · LW link

Repeated (and improved) Sleeping Beauty problem

Linda Linsefors · Jul 10, 2018, 10:32 PM
12 points
5 comments · 2 min read · LW link

Probability is fake, frequency is real

Linda Linsefors · Jul 10, 2018, 10:32 PM
12 points
7 comments · 1 min read · LW link

Conditioning, Counterfactuals, Exploration, and Gears

Diffractor · Jul 10, 2018, 10:11 PM
28 points
1 comment · 5 min read · LW link

Two agents can have the same source code and optimise different utility functions

Joar Skalse · Jul 10, 2018, 9:51 PM
11 points
11 comments · 1 min read · LW link

The Intentional Agency Experiment

Alexander Gietelink Oldenziel · Jul 10, 2018, 8:32 PM
13 points
5 comments · 3 min read · LW link

Announcing AlignmentForum.org Beta

Raemon · Jul 10, 2018, 8:19 PM
68 points
35 comments · 2 min read · LW link

Choosing to Choose?

Daniel Herrmann · Jul 10, 2018, 8:15 PM
10 points
7 comments · 5 min read · LW link

Study on what makes people approve or condemn mind upload technology; references LW

Kaj_Sotala · Jul 10, 2018, 5:14 PM
22 points
0 comments · 2 min read · LW link
(www.nature.com)

How to parent more predictably

jefftk · Jul 10, 2018, 3:18 PM
78 points
1 comment · 4 min read · LW link

Open Thread July 2018

null · Jul 10, 2018, 2:51 PM
10 points
9 comments · 1 min read · LW link

Three anchorings: number, attitude, and taste

Stuart_Armstrong · Jul 10, 2018, 2:21 PM
14 points
4 comments · 2 min read · LW link

The Dilemma of Worse Than Death Scenarios

arkaeik · Jul 10, 2018, 9:18 AM
14 points
18 comments · 4 min read · LW link