UML XI: Nearest Neighbor Schemes

Rafael Harth · Feb 16, 2020, 8:30 PM
15 points
3 comments · 9 min read · LW link

[Link and commentary] The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?

MichaelA · Feb 16, 2020, 7:56 PM
24 points
4 comments · 3 min read · LW link

Training Regime Day 2: Searching for bugs

Mark Xu · Feb 16, 2020, 5:16 PM
31 points
2 comments · 3 min read · LW link

Taking the Outgroup Seriously

Davis_Kingsley · Feb 16, 2020, 1:23 PM
21 points
8 comments · 2 min read · LW link

On characterizing heavy-tailedness

Jsevillamol · Feb 16, 2020, 12:14 AM
38 points
6 comments · 4 min read · LW link

Training Regime Day 1: What is applied rationality?

Mark Xu · Feb 15, 2020, 9:03 PM
33 points
7 comments · 4 min read · LW link

[Question] It “wanted” …

jmh · Feb 15, 2020, 8:52 PM
4 points
7 comments · 1 min read · LW link

Why Science is slowing down, Universities and Maslow’s hierarchy of needs

George3d6 · Feb 15, 2020, 8:39 PM
16 points
25 comments · 10 min read · LW link

Exercises in Comprehensive Information Gathering

johnswentworth · Feb 15, 2020, 5:27 PM
141 points
18 comments · 3 min read · LW link · 1 review

Reference Post: Trivial Decision Theory Problem

Chris_Leong · Feb 15, 2020, 5:13 PM
16 points
4 comments · 2 min read · LW link

[Question] What is the difference between robustness and inner alignment?

JanB · Feb 15, 2020, 1:28 PM
9 points
2 comments · 1 min read · LW link

[Question] Does iterated amplification tackle the inner alignment problem?

JanB · Feb 15, 2020, 12:58 PM
7 points
4 comments · 1 min read · LW link

Bayesian Evolving-to-Extinction

abramdemski · Feb 14, 2020, 11:55 PM
40 points
13 comments · 5 min read · LW link

[Question] A ‘Practice of Rationality’ Sequence?

abramdemski · Feb 14, 2020, 10:56 PM
78 points
25 comments · 3 min read · LW link

The Catastrophic Convergence Conjecture

TurnTrout · Feb 14, 2020, 9:16 PM
45 points
16 comments · 8 min read · LW link

The Reasonable Effectiveness of Mathematics or: AI vs sandwiches

Vanessa Kosoy · Feb 14, 2020, 6:46 PM
34 points
8 comments · 9 min read · LW link · 1 review

Perceptrons Explained

lifelonglearner · Feb 14, 2020, 5:34 PM
13 points
2 comments · 1 min read · LW link
(owenshen24.github.io)

Please Help Metaculus Forecast COVID-19

AABoyles · Feb 14, 2020, 5:31 PM
34 points
0 comments · 1 min read · LW link
(www.metaculus.com)

Training Regime Day 0: Introduction

Mark Xu · Feb 14, 2020, 8:22 AM
41 points
4 comments · 2 min read · LW link

Distinguishing definitions of takeoff

Matthew Barnett · Feb 14, 2020, 12:16 AM
79 points
6 comments · 6 min read · LW link

Effective Altruism 80,000 hours workshop materials & outline (and Feb 10 ’19 KC meetup notes)

samstowers · Feb 13, 2020, 9:48 PM
5 points
0 comments · 2 min read · LW link

[Question] How do you use face masks?

ChristianKl · Feb 13, 2020, 2:18 PM
12 points
1 comment · 1 min read · LW link

In theory: does building the subagent have an “impact”?

Stuart_Armstrong · Feb 13, 2020, 2:17 PM
17 points
4 comments · 4 min read · LW link

[Question] What fraction of work time in the world is done at a computer?

Mati_Roy · Feb 13, 2020, 9:53 AM
9 points
0 comments · 1 min read · LW link

A Variance Indifferent Maximizer Alternative

Nevan Wichers · Feb 13, 2020, 9:06 AM
7 points
1 comment · 4 min read · LW link

Confirmation Bias As Misfire Of Normal Bayesian Reasoning

Scott Alexander · Feb 13, 2020, 7:20 AM
43 points
9 comments · 2 min read · LW link
(slatestarcodex.com)

Building and using the subagent

Stuart_Armstrong · Feb 12, 2020, 7:28 PM
17 points
3 comments · 2 min read · LW link

[AN #86]: Improving debate and factored cognition through human experiments

Rohin Shah · Feb 12, 2020, 6:10 PM
15 points
0 comments · 9 min read · LW link
(mailchi.mp)

Suspiciously balanced evidence

gjm · Feb 12, 2020, 5:04 PM
50 points
24 comments · 4 min read · LW link

[Question] What are the risks of having your genome publicly available?

Mati_Roy · Feb 11, 2020, 9:54 PM
16 points
13 comments · LW link

Demons in Imperfect Search

johnswentworth · Feb 11, 2020, 8:25 PM
110 points
21 comments · 3 min read · LW link

[Question] Will COVID-19 survivors suffer lasting disability at a high rate?

jimrandomh · Feb 11, 2020, 8:23 PM
134 points
11 comments · 1 min read · LW link

The Relational Stance

Raemon · Feb 11, 2020, 5:16 AM
48 points
11 comments · 8 min read · LW link

Intelligence without causality

Donald Hobson · Feb 11, 2020, 12:34 AM
9 points
0 comments · 2 min read · LW link

South Bay Meetup

DavidFriedman · Feb 10, 2020, 10:36 PM
4 points
0 comments · LW link

Simulation of technological progress (work in progress)

Daniel Kokotajlo · Feb 10, 2020, 8:39 PM
21 points
9 comments · 5 min read · LW link

[Question] Why do we refuse to take action claiming our impact would be too small?

hookdump · Feb 10, 2020, 7:33 PM
5 points
31 comments · 1 min read · LW link

Gricean communication and meta-preferences

Charlie Steiner · Feb 10, 2020, 5:05 AM
24 points
0 comments · 3 min read · LW link

Attainable Utility Landscape: How The World Is Changed

TurnTrout · Feb 10, 2020, 12:58 AM
52 points
7 comments · 6 min read · LW link

A Simple Introduction to Neural Networks

Rafael Harth · Feb 9, 2020, 10:02 PM
34 points
13 comments · 18 min read · LW link

[Question] Did AI pioneers not worry much about AI risks?

lisperati · Feb 9, 2020, 7:58 PM
42 points
9 comments · 1 min read · LW link

[Question] Source of Karma

jmh · Feb 9, 2020, 2:13 PM
4 points
14 comments · 1 min read · LW link

State Space of X-Risk Trajectories

David_Kristoffersson · Feb 9, 2020, 1:56 PM
11 points
0 comments · 9 min read · LW link

[Question] Does there exist an AGI-level parameter setting for modern DRL architectures?

TurnTrout · Feb 9, 2020, 5:09 AM
15 points
3 comments · 1 min read · LW link

[Question] Who… (or what) designed this site and where did they come from?

thedayismine · Feb 9, 2020, 4:04 AM
12 points
3 comments · 1 min read · LW link

How to Frame Negative Feedback as Forward-Facing Guidance

Liron · Feb 9, 2020, 2:47 AM
46 points
7 comments · 3 min read · LW link

Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability

Zack_M_Davis · Feb 9, 2020, 12:34 AM
14 points
2 comments · 1 min read · LW link
(zackmdavis.net)

What can the principal-agent literature tell us about AI risk?

apc · Feb 8, 2020, 9:28 PM
104 points
29 comments · 16 min read · LW link

A Cautionary Note on Unlocking the Emotional Brain

eapache · Feb 8, 2020, 5:21 PM
55 points
20 comments · 2 min read · LW link

[Question] What is this review feature?

Long try · Feb 8, 2020, 3:30 PM
1 point
1 comment · 1 min read · LW link