Book Review: Decisive by Chip and Dan Heath

Ian David Moss · Feb 6, 2020, 8:15 PM
4 points
0 comments · 2 min read · LW link
(medium.com)

Bayes-Up: An App for Sharing Bayesian-MCQ

Louis Faucon · Feb 6, 2020, 7:01 PM
53 points
9 comments · 1 min read · LW link

Mazes Sequence Roundup: Final Thoughts and Paths Forward

Zvi · Feb 6, 2020, 4:10 PM
88 points
28 comments · 14 min read · LW link · 1 review
(thezvi.wordpress.com)

Plausibly, almost every powerful algorithm would be manipulative

Stuart_Armstrong · Feb 6, 2020, 11:50 AM
38 points
25 comments · 3 min read · LW link

Some quick notes on hand hygiene

willbradshaw · Feb 6, 2020, 2:47 AM
68 points
52 comments · 3 min read · LW link

Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration

Vaughn Papenhausen · Feb 6, 2020, 1:09 AM
15 points
4 comments · 4 min read · LW link

Synthesizing amplification and debate

evhub · Feb 5, 2020, 10:53 PM
33 points
10 comments · 4 min read · LW link

Writeup: Progress on AI Safety via Debate

Feb 5, 2020, 9:04 PM
103 points
18 comments · 33 min read · LW link

[AN #85]: The normative questions we should be asking for AI alignment, and a surprisingly good chatbot

Rohin Shah · Feb 5, 2020, 6:20 PM
14 points
2 comments · 7 min read · LW link
(mailchi.mp)

The Adventure: a new Utopia story

Stuart_Armstrong · Feb 5, 2020, 4:50 PM
101 points
37 comments · 51 min read · LW link

“But that’s your job”: why organisations can work

Stuart_Armstrong · Feb 5, 2020, 12:25 PM
77 points
12 comments · 4 min read · LW link

Training a tiny SupAmp model on easy tasks. The influence of failure rate on learning curves

rmoehn · Feb 5, 2020, 7:22 AM
5 points
0 comments · 1 min read · LW link

Physical alignment—do you have it? Take a minute & check.

leggi · Feb 5, 2020, 4:02 AM
4 points
4 comments · 1 min read · LW link

Open & Welcome Thread—February 2020

ryan_b · Feb 4, 2020, 8:49 PM
17 points
114 comments · 1 min read · LW link

Meta-Preference Utilitarianism

B Jacobs · Feb 4, 2020, 8:24 PM
10 points
30 comments · 1 min read · LW link

Philosophical self-ratification

jessicata · Feb 3, 2020, 10:48 PM
23 points
13 comments · 5 min read · LW link
(unstableontology.com)

Twenty-three AI alignment research project definitions

rmoehn · Feb 3, 2020, 10:21 PM
23 points
0 comments · 6 min read · LW link

Absent coordination, future technology will cause human extinction

Jeffrey Ladish · Feb 3, 2020, 9:52 PM
21 points
12 comments · 5 min read · LW link

Long Now, and Culture vs Artifacts

Raemon · Feb 3, 2020, 9:49 PM
26 points
3 comments · 6 min read · LW link

[Question] Looking for books about software engineering as a field

mingyuan · Feb 3, 2020, 9:49 PM
14 points
15 comments · 1 min read · LW link

Category Theory Without The Baggage

johnswentworth · Feb 3, 2020, 8:03 PM
139 points
51 comments · 13 min read · LW link

Protecting Large Projects Against Mazedom

Zvi · Feb 3, 2020, 5:10 PM
78 points
11 comments · 4 min read · LW link · 1 review
(thezvi.wordpress.com)

Pessimism About Unknown Unknowns Inspires Conservatism

michaelcohen · Feb 3, 2020, 2:48 PM
41 points
2 comments · 5 min read · LW link

Map Of Effective Altruism

Scott Alexander · Feb 3, 2020, 6:20 AM
17 points
1 comment · 1 min read · LW link
(slatestarcodex.com)

UML IX: Kernels and Boosting

Rafael Harth · Feb 2, 2020, 9:51 PM
13 points
1 comment · 10 min read · LW link

A point of clarification on infohazard terminology

eukaryote · Feb 2, 2020, 5:43 PM
52 points
21 comments · 2 min read · LW link
(eukaryotewritesblog.com)

[Question] Money isn’t real. When you donate money to a charity, how does it actually help?

Dagon · Feb 2, 2020, 5:03 PM
15 points
28 comments · 1 min read · LW link

[Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting

Bird Concept · Feb 2, 2020, 12:39 PM
33 points
6 comments · 1 min read · LW link

The Case for Artificial Expert Intelligence (AXI): What lies between narrow and general AI?

Yuli_Ban · Feb 2, 2020, 5:55 AM
8 points
2 comments · 6 min read · LW link

“Memento Mori”, Said The Confessor

namespace · Feb 2, 2020, 3:37 AM
34 points
4 comments · 1 min read · LW link
(www.thelastrationalist.com)

Bay Winter Solstice seating-scarcity

Raemon · Feb 1, 2020, 11:09 PM
2 points
3 comments · 2 min read · LW link

The case for lifelogging as life extension

Matthew Barnett · Feb 1, 2020, 9:56 PM
51 points
17 comments · 3 min read · LW link · 1 review

What Money Cannot Buy

johnswentworth · Feb 1, 2020, 8:11 PM
348 points
53 comments · 4 min read · LW link · 1 review

Effective Altruism QALY workshop materials & outline (and Jan 13 ’19 meetup notes)

samstowers · Feb 1, 2020, 4:42 AM
10 points
1 comment · 3 min read · LW link

More Rhythm Options

jefftk · Feb 1, 2020, 3:10 AM
1 point
0 comments · 1 min read · LW link
(www.jefftk.com)

[Question] Instrumental Occam?

abramdemski · Jan 31, 2020, 7:27 PM
30 points
15 comments · 1 min read · LW link

REVISED: A drowning child is hard to find

Benquo · Jan 31, 2020, 6:07 PM
22 points
35 comments · 1 min read · LW link
(benjaminrosshoffman.com)

January 2020 gwern.net newsletter

gwern · Jan 31, 2020, 6:04 PM
19 points
0 comments · LW link
(www.gwern.net)

Create a Full Alternative Stack

Zvi · Jan 31, 2020, 5:10 PM
84 points
14 comments · 6 min read · LW link · 1 review
(thezvi.wordpress.com)

[Link] Ignorance, a skilled practice

romeostevensit · Jan 31, 2020, 4:21 PM
16 points
9 comments · 2 min read · LW link

[ELDR Tactics] Consider switching to (mostly) decaf.

aaq · Jan 31, 2020, 3:09 PM
29 points
2 comments · 4 min read · LW link

[Question] Existing work on creating terminology & names?

ozziegooen · Jan 31, 2020, 12:16 PM
10 points
6 comments · 1 min read · LW link

Book Review: Human Compatible

Scott Alexander · Jan 31, 2020, 5:20 AM
78 points
6 comments · 16 min read · LW link
(slatestarcodex.com)

HALIFAX SSC MEETUP—FEB. 1

interstice · Jan 31, 2020, 3:59 AM
4 points
0 comments · 1 min read · LW link

High-precision claims may be refuted without being replaced with other high-precision claims

jessicata · Jan 30, 2020, 11:08 PM
56 points
31 comments · 3 min read · LW link
(unstableontology.com)

[Question] how has this forum changed your life?

Jon Ronson · Jan 30, 2020, 9:54 PM
26 points
20 comments · 1 min read · LW link

Artificial Intelligence, Values and Alignment

Gordon Seidoh Worley · Jan 30, 2020, 7:48 PM
13 points
1 comment · 1 min read · LW link
(deepmind.com)

If brains are computers, what kind of computers are they? (Dennett transcript)

Ben Pace · Jan 30, 2020, 5:07 AM
37 points
9 comments · 27 min read · LW link

Value uncertainty

MichaelA · Jan 29, 2020, 8:16 PM
20 points
3 comments · 14 min read · LW link

Towards deconfusing values

Gordon Seidoh Worley · Jan 29, 2020, 7:28 PM
12 points
4 comments · 7 min read · LW link