[Question] What are the risks of having your genome publicly available?

Mati_Roy · Feb 11, 2020, 9:54 PM
16 points
13 comments · LW link

Demons in Imperfect Search

johnswentworth · Feb 11, 2020, 8:25 PM
110 points
21 comments · 3 min read · LW link

[Question] Will COVID-19 survivors suffer lasting disability at a high rate?

jimrandomh · Feb 11, 2020, 8:23 PM
134 points
11 comments · 1 min read · LW link

The Relational Stance

Raemon · Feb 11, 2020, 5:16 AM
48 points
11 comments · 8 min read · LW link

Intelligence without causality

Donald Hobson · Feb 11, 2020, 12:34 AM
9 points
0 comments · 2 min read · LW link

South Bay Meetup

DavidFriedman · Feb 10, 2020, 10:36 PM
4 points
0 comments · LW link

Simulation of technological progress (work in progress)

Daniel Kokotajlo · Feb 10, 2020, 8:39 PM
21 points
9 comments · 5 min read · LW link

[Question] Why do we refuse to take action claiming our impact would be too small?

hookdump · Feb 10, 2020, 7:33 PM
5 points
31 comments · 1 min read · LW link

Gricean communication and meta-preferences

Charlie Steiner · Feb 10, 2020, 5:05 AM
24 points
0 comments · 3 min read · LW link

Attainable Utility Landscape: How The World Is Changed

TurnTrout · Feb 10, 2020, 12:58 AM
52 points
7 comments · 6 min read · LW link

A Simple Introduction to Neural Networks

Rafael Harth · Feb 9, 2020, 10:02 PM
34 points
13 comments · 18 min read · LW link

[Question] Did AI pioneers not worry much about AI risks?

lisperati · Feb 9, 2020, 7:58 PM
42 points
9 comments · 1 min read · LW link

[Question] Source of Karma

jmh · Feb 9, 2020, 2:13 PM
4 points
14 comments · 1 min read · LW link

State Space of X-Risk Trajectories

David_Kristoffersson · Feb 9, 2020, 1:56 PM
11 points
0 comments · 9 min read · LW link

[Question] Does there exist an AGI-level parameter setting for modern DRL architectures?

TurnTrout · Feb 9, 2020, 5:09 AM
15 points
3 comments · 1 min read · LW link

[Question] Who… (or what) designed this site and where did they come from?

thedayismine · Feb 9, 2020, 4:04 AM
12 points
3 comments · 1 min read · LW link

How to Frame Negative Feedback as Forward-Facing Guidance

Liron · Feb 9, 2020, 2:47 AM
46 points
7 comments · 3 min read · LW link

Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability

Zack_M_Davis · Feb 9, 2020, 12:34 AM
14 points
2 comments · 1 min read · LW link
(zackmdavis.net)

What can the principal-agent literature tell us about AI risk?

apc · Feb 8, 2020, 9:28 PM
104 points
29 comments · 16 min read · LW link

A Cautionary Note on Unlocking the Emotional Brain

eapache · Feb 8, 2020, 5:21 PM
55 points
20 comments · 2 min read · LW link

[Question] What is this review feature?

Long try · Feb 8, 2020, 3:30 PM
1 point
1 comment · 1 min read · LW link

Halifax SSC Meetup—FEB 8

interstice · Feb 8, 2020, 12:45 AM
4 points
0 comments · 1 min read · LW link

On the falsifiability of hypercomputation

jessicata · Feb 7, 2020, 8:16 AM
24 points
4 comments · 4 min read · LW link
(unstableontology.com)

More writeups!

jefftk · Feb 7, 2020, 3:10 AM
40 points
5 comments · 1 min read · LW link
(www.jefftk.com)

Book Review: Decisive by Chip and Dan Heath

Ian David Moss · Feb 6, 2020, 8:15 PM
4 points
0 comments · 2 min read · LW link
(medium.com)

Bayes-Up: An App for Sharing Bayesian-MCQ

Louis Faucon · Feb 6, 2020, 7:01 PM
53 points
9 comments · 1 min read · LW link

Mazes Sequence Roundup: Final Thoughts and Paths Forward

Zvi · Feb 6, 2020, 4:10 PM
88 points
28 comments · 14 min read · LW link · 1 review
(thezvi.wordpress.com)

Plausibly, almost every powerful algorithm would be manipulative

Stuart_Armstrong · Feb 6, 2020, 11:50 AM
38 points
25 comments · 3 min read · LW link

Some quick notes on hand hygiene

willbradshaw · Feb 6, 2020, 2:47 AM
68 points
52 comments · 3 min read · LW link

Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration

Vaughn Papenhausen · Feb 6, 2020, 1:09 AM
15 points
4 comments · 4 min read · LW link

Synthesizing amplification and debate

evhub · Feb 5, 2020, 10:53 PM
33 points
10 comments · 4 min read · LW link

Writeup: Progress on AI Safety via Debate

Feb 5, 2020, 9:04 PM
103 points
18 comments · 33 min read · LW link

[AN #85]: The normative questions we should be asking for AI alignment, and a surprisingly good chatbot

Rohin Shah · Feb 5, 2020, 6:20 PM
14 points
2 comments · 7 min read · LW link
(mailchi.mp)

The Adventure: a new Utopia story

Stuart_Armstrong · Feb 5, 2020, 4:50 PM
101 points
37 comments · 51 min read · LW link

“But that’s your job”: why organisations can work

Stuart_Armstrong · Feb 5, 2020, 12:25 PM
77 points
12 comments · 4 min read · LW link

Training a tiny SupAmp model on easy tasks. The influence of failure rate on learning curves

rmoehn · Feb 5, 2020, 7:22 AM
5 points
0 comments · 1 min read · LW link

Physical alignment—do you have it? Take a minute & check.

leggi · Feb 5, 2020, 4:02 AM
4 points
4 comments · 1 min read · LW link

Open & Welcome Thread—February 2020

ryan_b · Feb 4, 2020, 8:49 PM
17 points
114 comments · 1 min read · LW link

Meta-Preference Utilitarianism

B Jacobs · Feb 4, 2020, 8:24 PM
10 points
30 comments · 1 min read · LW link

Philosophical self-ratification

jessicata · Feb 3, 2020, 10:48 PM
23 points
13 comments · 5 min read · LW link
(unstableontology.com)

Twenty-three AI alignment research project definitions

rmoehn · Feb 3, 2020, 10:21 PM
23 points
0 comments · 6 min read · LW link

Absent coordination, future technology will cause human extinction

Jeffrey Ladish · Feb 3, 2020, 9:52 PM
21 points
12 comments · 5 min read · LW link

Long Now, and Culture vs Artifacts

Raemon · Feb 3, 2020, 9:49 PM
26 points
3 comments · 6 min read · LW link

[Question] Looking for books about software engineering as a field

mingyuan · Feb 3, 2020, 9:49 PM
14 points
15 comments · 1 min read · LW link

Category Theory Without The Baggage

johnswentworth · Feb 3, 2020, 8:03 PM
139 points
51 comments · 13 min read · LW link

Protecting Large Projects Against Mazedom

Zvi · Feb 3, 2020, 5:10 PM
78 points
11 comments · 4 min read · LW link · 1 review
(thezvi.wordpress.com)

Pessimism About Unknown Unknowns Inspires Conservatism

michaelcohen · Feb 3, 2020, 2:48 PM
41 points
2 comments · 5 min read · LW link

Map Of Effective Altruism

Scott Alexander · Feb 3, 2020, 6:20 AM
17 points
1 comment · 1 min read · LW link
(slatestarcodex.com)

UML IX: Kernels and Boosting

Rafael Harth · Feb 2, 2020, 9:51 PM
13 points
1 comment · 10 min read · LW link

A point of clarification on infohazard terminology

eukaryote · Feb 2, 2020, 5:43 PM
52 points
21 comments · 2 min read · LW link
(eukaryotewritesblog.com)