Effective Altruism 80,000 hours workshop materials & outline (and Feb 10 ’19 KC meetup notes)

samstowers · 13 Feb 2020 21:48 UTC
5 points
0 comments · 2 min read · LW link

[Question] How do you use face masks?

ChristianKl · 13 Feb 2020 14:18 UTC
12 points
1 comment · 1 min read · LW link

In theory: does building the subagent have an “impact”?

Stuart_Armstrong · 13 Feb 2020 14:17 UTC
17 points
4 comments · 4 min read · LW link

[Question] What fraction of work time in the world is done at a computer?

Mati_Roy · 13 Feb 2020 9:53 UTC
9 points
0 comments · 1 min read · LW link

A Variance Indifferent Maximizer Alternative

Nevan Wichers · 13 Feb 2020 9:06 UTC
7 points
1 comment · 4 min read · LW link

Confirmation Bias As Misfire Of Normal Bayesian Reasoning

Scott Alexander · 13 Feb 2020 7:20 UTC
43 points
9 comments · 2 min read · LW link
(slatestarcodex.com)

Building and using the subagent

Stuart_Armstrong · 12 Feb 2020 19:28 UTC
17 points
3 comments · 2 min read · LW link

[AN #86]: Improving debate and factored cognition through human experiments

Rohin Shah · 12 Feb 2020 18:10 UTC
15 points
0 comments · 9 min read · LW link
(mailchi.mp)

Suspiciously balanced evidence

gjm · 12 Feb 2020 17:04 UTC
50 points
24 comments · 4 min read · LW link

[Question] What are the risks of having your genome publicly available?

Mati_Roy · 11 Feb 2020 21:54 UTC
16 points
13 comments · 1 min read · LW link

Demons in Imperfect Search

johnswentworth · 11 Feb 2020 20:25 UTC
107 points
21 comments · 3 min read · LW link

[Question] Will COVID-19 survivors suffer lasting disability at a high rate?

jimrandomh · 11 Feb 2020 20:23 UTC
134 points
11 comments · 1 min read · LW link

The Relational Stance

Raemon · 11 Feb 2020 5:16 UTC
47 points
11 comments · 8 min read · LW link

Intelligence without causality

Donald Hobson · 11 Feb 2020 0:34 UTC
9 points
0 comments · 2 min read · LW link

South Bay Meetup

DavidFriedman · 10 Feb 2020 22:36 UTC
4 points
0 comments · 1 min read · LW link

Simulation of technological progress (work in progress)

Daniel Kokotajlo · 10 Feb 2020 20:39 UTC
21 points
9 comments · 5 min read · LW link

[Question] Why do we refuse to take action claiming our impact would be too small?

hookdump · 10 Feb 2020 19:33 UTC
5 points
31 comments · 1 min read · LW link

Gricean communication and meta-preferences

Charlie Steiner · 10 Feb 2020 5:05 UTC
24 points
0 comments · 3 min read · LW link

Attainable Utility Landscape: How The World Is Changed

TurnTrout · 10 Feb 2020 0:58 UTC
52 points
7 comments · 6 min read · LW link

A Simple Introduction to Neural Networks

Rafael Harth · 9 Feb 2020 22:02 UTC
34 points
13 comments · 18 min read · LW link

[Question] Did AI pioneers not worry much about AI risks?

lisperati · 9 Feb 2020 19:58 UTC
42 points
9 comments · 1 min read · LW link

[Question] Source of Karma

jmh · 9 Feb 2020 14:13 UTC
4 points
14 comments · 1 min read · LW link

State Space of X-Risk Trajectories

David_Kristoffersson · 9 Feb 2020 13:56 UTC
11 points
0 comments · 9 min read · LW link

[Question] Does there exist an AGI-level parameter setting for modern DRL architectures?

TurnTrout · 9 Feb 2020 5:09 UTC
15 points
3 comments · 1 min read · LW link

[Question] Who… (or what) designed this site and where did they come from?

thedayismine · 9 Feb 2020 4:04 UTC
12 points
3 comments · 1 min read · LW link

How to Frame Negative Feedback as Forward-Facing Guidance

Liron · 9 Feb 2020 2:47 UTC
46 points
7 comments · 3 min read · LW link

Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability

Zack_M_Davis · 9 Feb 2020 0:34 UTC
14 points
2 comments · 1 min read · LW link
(zackmdavis.net)

What can the principal-agent literature tell us about AI risk?

apc · 8 Feb 2020 21:28 UTC
104 points
29 comments · 16 min read · LW link

A Cautionary Note on Unlocking the Emotional Brain

eapache · 8 Feb 2020 17:21 UTC
54 points
20 comments · 2 min read · LW link

[Question] What is this review feature?

Long try · 8 Feb 2020 15:30 UTC
1 point
1 comment · 1 min read · LW link

Halifax SSC Meetup—FEB 8

interstice · 8 Feb 2020 0:45 UTC
4 points
0 comments · 1 min read · LW link

On the falsifiability of hypercomputation

jessicata · 7 Feb 2020 8:16 UTC
24 points
4 comments · 4 min read · LW link
(unstableontology.com)

More writeups!

jefftk · 7 Feb 2020 3:10 UTC
40 points
5 comments · 1 min read · LW link
(www.jefftk.com)

Book Review: Decisive by Chip and Dan Heath

Ian David Moss · 6 Feb 2020 20:15 UTC
4 points
0 comments · 2 min read · LW link
(medium.com)

Bayes-Up: An App for Sharing Bayesian-MCQ

Louis Faucon · 6 Feb 2020 19:01 UTC
53 points
9 comments · 1 min read · LW link

Mazes Sequence Roundup: Final Thoughts and Paths Forward

Zvi · 6 Feb 2020 16:10 UTC
88 points
28 comments · 14 min read · LW link · 1 review
(thezvi.wordpress.com)

Plausibly, almost every powerful algorithm would be manipulative

Stuart_Armstrong · 6 Feb 2020 11:50 UTC
38 points
25 comments · 3 min read · LW link

Some quick notes on hand hygiene

willbradshaw · 6 Feb 2020 2:47 UTC
68 points
52 comments · 3 min read · LW link

Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration

Vaughn Papenhausen · 6 Feb 2020 1:09 UTC
15 points
4 comments · 4 min read · LW link

Synthesizing amplification and debate

evhub · 5 Feb 2020 22:53 UTC
33 points
10 comments · 4 min read · LW link

Writeup: Progress on AI Safety via Debate

5 Feb 2020 21:04 UTC
102 points
18 comments · 33 min read · LW link

[AN #85]: The normative questions we should be asking for AI alignment, and a surprisingly good chatbot

Rohin Shah · 5 Feb 2020 18:20 UTC
14 points
2 comments · 7 min read · LW link
(mailchi.mp)

The Adventure: a new Utopia story

Stuart_Armstrong · 5 Feb 2020 16:50 UTC
100 points
37 comments · 51 min read · LW link

“But that’s your job”: why organisations can work

Stuart_Armstrong · 5 Feb 2020 12:25 UTC
77 points
12 comments · 4 min read · LW link

Training a tiny SupAmp model on easy tasks. The influence of failure rate on learning curves

rmoehn · 5 Feb 2020 7:22 UTC
5 points
0 comments · 1 min read · LW link

Physical alignment—do you have it? Take a minute & check.

leggi · 5 Feb 2020 4:02 UTC
4 points
4 comments · 1 min read · LW link

Open & Welcome Thread—February 2020

ryan_b · 4 Feb 2020 20:49 UTC
17 points
114 comments · 1 min read · LW link

Meta-Preference Utilitarianism

B Jacobs · 4 Feb 2020 20:24 UTC
10 points
30 comments · 1 min read · LW link

Philosophical self-ratification

jessicata · 3 Feb 2020 22:48 UTC
23 points
13 comments · 5 min read · LW link
(unstableontology.com)

Twenty-three AI alignment research project definitions

rmoehn · 3 Feb 2020 22:21 UTC
23 points
0 comments · 6 min read · LW link