“Memento Mori”, Said The Confessor

namespace · Feb 2, 2020, 3:37 AM
34 points
4 comments · 1 min read · LW link
(www.thelastrationalist.com)

Please Help Metaculus Forecast COVID-19

AABoyles · Feb 14, 2020, 5:31 PM
34 points
0 comments · 1 min read · LW link
(www.metaculus.com)

The Reasonable Effectiveness of Mathematics or: AI vs sandwiches

Vanessa Kosoy · Feb 14, 2020, 6:46 PM
34 points
8 comments · 9 min read · LW link · 1 review

Training Regime Day 4: Murphyjitsu

Mark Xu · Feb 18, 2020, 5:33 PM
34 points
0 comments · 7 min read · LW link

[Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting

Bird Concept · Feb 2, 2020, 12:39 PM
33 points
6 comments · 1 min read · LW link

On unfixably unsafe AGI architectures

Steven Byrnes · Feb 19, 2020, 9:16 PM
33 points
8 comments · 5 min read · LW link

Synthesizing amplification and debate

evhub · Feb 5, 2020, 10:53 PM
33 points
10 comments · 4 min read · LW link

Refactoring EMH – Thoughts following the latest market crash

Alex_Shleizer · Feb 28, 2020, 3:53 PM
33 points
16 comments · 3 min read · LW link

Training Regime Day 1: What is applied rationality?

Mark Xu · Feb 15, 2020, 9:03 PM
33 points
7 comments · 4 min read · LW link

Reasons for Excitement about Impact of Impact Measure Research

TurnTrout · Feb 27, 2020, 9:42 PM
33 points
8 comments · 4 min read · LW link

We Want MoR (HPMOR Discussion Podcast) Completes Book One

moridinamael · Feb 19, 2020, 12:34 AM
31 points
0 comments · 1 min read · LW link

Training Regime Day 8: Noticing

Mark Xu · Feb 22, 2020, 7:47 PM
31 points
7 comments · 3 min read · LW link · 2 reviews

Training Regime Day 2: Searching for bugs

Mark Xu · Feb 16, 2020, 5:16 PM
31 points
2 comments · 3 min read · LW link

Subagents and impact measures, full and fully illustrated

Stuart_Armstrong · Feb 24, 2020, 1:12 PM
31 points
14 comments · 17 min read · LW link

Continuous Improvement: Insights from ‘Topology’

TurnTrout · Feb 22, 2020, 9:58 PM
30 points
4 comments · 4 min read · LW link

[Question] At what point should CFAR stop holding workshops due to COVID-19?

Adam Scholl · Feb 25, 2020, 9:59 AM
29 points
11 comments · 1 min read · LW link

[Question] How do you survive in the humanities?

polymathwannabe · Feb 20, 2020, 3:19 PM
29 points
20 comments · 2 min read · LW link

Draft: Models of Risks of Delivery Under Coronavirus

Elizabeth · Feb 28, 2020, 4:10 AM
28 points
5 comments · 1 min read · LW link
(acesounderglass.com)

Curiosity Killed the Cat and the Asymptotically Optimal Agent

michaelcohen · Feb 20, 2020, 5:28 PM
28 points
15 comments · 1 min read · LW link

Attainable Utility Preservation: Scaling to Superhuman

TurnTrout · Feb 27, 2020, 12:52 AM
28 points
22 comments · 8 min read · LW link

Other versions of “No free lunch in value learning”

Stuart_Armstrong · Feb 25, 2020, 2:25 PM
28 points
0 comments · 1 min read · LW link

How Low Should Fruit Hang Before We Pick It?

TurnTrout · Feb 25, 2020, 2:08 AM
28 points
9 comments · 12 min read · LW link

Why SENS makes sense

emanuele ascani · Feb 22, 2020, 4:28 PM
28 points
2 comments · 31 min read · LW link

[AN #87]: What might happen as deep learning scales even further?

Rohin Shah · Feb 19, 2020, 6:20 PM
28 points
0 comments · 4 min read · LW link
(mailchi.mp)

Will AI undergo discontinuous progress?

Sammy Martin · Feb 21, 2020, 10:16 PM
27 points
21 comments · 20 min read · LW link

Memetic downside risks: How ideas can evolve and cause harm

Feb 25, 2020, 7:47 PM
27 points
3 comments · 15 min read · LW link

Response to Oren Etzioni’s “How to know if artificial intelligence is about to destroy civilization”

Daniel Kokotajlo · Feb 27, 2020, 6:10 PM
27 points
5 comments · 8 min read · LW link

Long Now, and Culture vs Artifacts

Raemon · Feb 3, 2020, 9:49 PM
26 points
3 comments · 6 min read · LW link

Resource on alcohol problems

juliawise · Feb 27, 2020, 6:17 PM
26 points
3 comments · 4 min read · LW link

On the falsifiability of hypercomputation, part 2: finite input streams

jessicata · Feb 17, 2020, 3:51 AM
26 points
7 comments · 4 min read · LW link
(unstableontology.com)

Training Regime Day 5: TAPs

Mark Xu · Feb 19, 2020, 6:11 PM
26 points
0 comments · 7 min read · LW link

Where’s the Turing Machine? A step towards Ontology Identification

adamShimi · Feb 26, 2020, 5:10 PM
25 points
0 comments · 8 min read · LW link

Training Regime Day 3: Tips and Tricks

Mark Xu · Feb 17, 2020, 6:53 PM
24 points
5 comments · 11 min read · LW link

[Link and commentary] The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?

MichaelA · Feb 16, 2020, 7:56 PM
24 points
4 comments · 3 min read · LW link

Time Binders

Slimepriestess · Feb 24, 2020, 9:55 AM
24 points
10 comments · 1 min read · LW link
(hivewired.wordpress.com)

On the falsifiability of hypercomputation

jessicata · Feb 7, 2020, 8:16 AM
24 points
4 comments · 4 min read · LW link
(unstableontology.com)

Gricean communication and meta-preferences

Charlie Steiner · Feb 10, 2020, 5:05 AM
24 points
0 comments · 3 min read · LW link

[Question] What to make of Aubrey de Grey’s prediction?

Rafael Harth · Feb 28, 2020, 7:25 PM
23 points
18 comments · 1 min read · LW link

Philosophical self-ratification

jessicata · Feb 3, 2020, 10:48 PM
23 points
13 comments · 5 min read · LW link
(unstableontology.com)

Mapping downside risks and information hazards

Feb 20, 2020, 2:46 PM
23 points
0 comments · 9 min read · LW link

Making Sense of Coronavirus Stats

jmh · Feb 20, 2020, 3:12 PM
23 points
28 comments · 2 min read · LW link

Twenty-three AI alignment research project definitions

rmoehn · Feb 3, 2020, 10:21 PM
23 points
0 comments · 6 min read · LW link

If I were a well-intentioned AI… III: Extremal Goodhart

Stuart_Armstrong · Feb 28, 2020, 11:24 AM
22 points
0 comments · 5 min read · LW link

Predictive coding and motor control

Steven Byrnes · Feb 23, 2020, 2:04 AM
22 points
3 comments · 4 min read · LW link

[Productivity] How not to use “Important // Not Urgent”

aaq · Feb 17, 2020, 11:42 PM
22 points
0 comments · 1 min read · LW link

Simulation of technological progress (work in progress)

Daniel Kokotajlo · Feb 10, 2020, 8:39 PM
21 points
9 comments · 5 min read · LW link

Absent coordination, future technology will cause human extinction

Jeffrey Ladish · Feb 3, 2020, 9:52 PM
21 points
12 comments · 5 min read · LW link

Abstract Plans Lead to Failure

Chris_Leong · Feb 27, 2020, 9:20 PM
21 points
0 comments · 1 min read · LW link

Wireheading and discontinuity

Michele Campolo · Feb 18, 2020, 10:49 AM
21 points
4 comments · 3 min read · LW link

Goal-directed = Model-based RL?

adamShimi · Feb 20, 2020, 7:13 PM
21 points
10 comments · 3 min read · LW link