Decelerating: laser vs gun vs rocket

Stuart_Armstrong · 18 Feb 2019 23:21 UTC
22 points
16 comments · 4 min read · LW link

Epistemic Tenure

Scott Garrabrant · 18 Feb 2019 22:56 UTC
89 points
27 comments · 3 min read · LW link

[Question] A Strange Situation

Flange Finnegan · 18 Feb 2019 20:38 UTC
12 points
10 comments · 1 min read · LW link

Implications of GPT-2

Gurkenglas · 18 Feb 2019 10:57 UTC
36 points
28 comments · 1 min read · LW link

Is voting theory important? An attempt to check my bias.

Jameson Quinn · 17 Feb 2019 23:45 UTC
42 points
14 comments · 6 min read · LW link

Avoiding Jargon Confusion

Raemon · 17 Feb 2019 23:37 UTC
46 points
35 comments · 4 min read · LW link

Robin Hanson on Lumpiness of AI Services

DanielFilan · 17 Feb 2019 23:08 UTC
15 points
2 comments · 2 min read · LW link
(www.overcomingbias.com)

The Clockmaker’s Argument (But not Really)

GregorDeVillain · 17 Feb 2019 21:20 UTC
1 point
3 comments · 3 min read · LW link

Can We Place Trust in Post-AGI Forecasting Evaluations?

ozziegooen · 17 Feb 2019 19:20 UTC
22 points
16 comments · 2 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · 17 Feb 2019 18:28 UTC
6 points
2 comments · 1 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · 17 Feb 2019 18:27 UTC
6 points
0 comments · 1 min read · LW link

Extraordinary ethics require extraordinary arguments

aaq · 17 Feb 2019 14:59 UTC
26 points
6 comments · 2 min read · LW link

Limiting an AGI’s Context Temporally

EulersApprentice · 17 Feb 2019 3:29 UTC
5 points
11 comments · 1 min read · LW link

Major Donation: Long Term Future Fund Application Extended 1 Week

habryka · 16 Feb 2019 23:30 UTC
42 points
3 comments · 1 min read · LW link

Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery

Alexander230 · 16 Feb 2019 22:29 UTC
3 points
2 comments · 1 min read · LW link

[Question] Is there a way to hire academics hourly?

Ixiel · 16 Feb 2019 14:21 UTC
6 points
2 comments · 1 min read · LW link

Graceful Shutdown

Martin Sustrik · 16 Feb 2019 11:30 UTC
10 points
4 comments · 13 min read · LW link
(250bpm.com)

[Question] Why didn’t Agoric Computing become popular?

Wei Dai · 16 Feb 2019 6:19 UTC
52 points
22 comments · 2 min read · LW link

Pedagogy as Struggle

lifelonglearner · 16 Feb 2019 2:12 UTC
13 points
9 comments · 2 min read · LW link

How the MtG Color Wheel Explains AI Safety

Scott Garrabrant · 15 Feb 2019 23:42 UTC
65 points
4 comments · 6 min read · LW link

Some disjunctive reasons for urgency on AI risk

Wei Dai · 15 Feb 2019 20:43 UTC
36 points
24 comments · 1 min read · LW link

So you want to be a wizard

NaiveTortoise · 15 Feb 2019 15:43 UTC
16 points
0 comments · 1 min read · LW link
(jvns.ca)

Cooperation is for Winners

Jacob Falkovich · 15 Feb 2019 14:58 UTC
21 points
6 comments · 4 min read · LW link

Quantifying anthropic effects on the Fermi paradox

Lukas Finnveden · 15 Feb 2019 10:51 UTC
29 points
5 comments · 27 min read · LW link

[Question] How does OpenAI’s language model affect our AI timeline estimates?

jimrandomh · 15 Feb 2019 3:11 UTC
50 points
7 comments · 1 min read · LW link

Has The Function To Sort Posts By Votes Stopped Working?

Capybasilisk · 14 Feb 2019 19:14 UTC
1 point
3 comments · 1 min read · LW link

[Question] Who owns OpenAI’s new language model?

ioannes · 14 Feb 2019 17:51 UTC
16 points
9 comments · 1 min read · LW link

The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work

ozziegooen · 14 Feb 2019 16:21 UTC
43 points
15 comments · 3 min read · LW link

Short story: An AGI’s Repugnant Physics Experiment

ozziegooen · 14 Feb 2019 14:46 UTC
9 points
5 comments · 1 min read · LW link

New York Restaurants I Love: Breakfast

Zvi · 14 Feb 2019 13:10 UTC
10 points
3 comments · 8 min read · LW link
(thezvi.wordpress.com)

[Question] Are there documentaries on rationality?

Yoav Ravid · 14 Feb 2019 11:34 UTC
12 points
5 comments · 1 min read · LW link

Alignment Newsletter #45

Rohin Shah · 14 Feb 2019 2:10 UTC
25 points
2 comments · 8 min read · LW link
(mailchi.mp)

Three Kinds of Research Documents: Exploration, Explanation, Academic

ozziegooen · 13 Feb 2019 21:25 UTC
22 points
18 comments · 3 min read · LW link

Humans interpreting humans

Stuart_Armstrong · 13 Feb 2019 19:03 UTC
12 points
1 comment · 2 min read · LW link

Anchoring vs Taste: a model

Stuart_Armstrong · 13 Feb 2019 19:03 UTC
10 points
0 comments · 2 min read · LW link

[Question] Individual profit-sharing?

ioannes · 13 Feb 2019 17:58 UTC
10 points
8 comments · 1 min read · LW link

The RAIN Framework for Informational Effectiveness

ozziegooen · 13 Feb 2019 12:54 UTC
37 points
16 comments · 6 min read · LW link

On Long and Insightful Posts

Qria · 13 Feb 2019 3:52 UTC
19 points
3 comments · 1 min read · LW link

Layers of Expertise and the Curse of Curiosity

Gyrodiot · 12 Feb 2019 23:41 UTC
19 points
1 comment · 6 min read · LW link

Nuances with ascription universality

evhub · 12 Feb 2019 23:38 UTC
20 points
1 comment · 2 min read · LW link

Learning preferences by looking at the world

Rohin Shah · 12 Feb 2019 22:25 UTC
43 points
10 comments · 7 min read · LW link
(bair.berkeley.edu)

Functional silence: communication that minimizes change of receiver’s beliefs

chaosmage · 12 Feb 2019 21:32 UTC
27 points
5 comments · 2 min read · LW link

Arguments for moral indefinability

Richard_Ngo · 12 Feb 2019 10:40 UTC
50 points
10 comments · 7 min read · LW link
(thinkingcomplete.blogspot.com)

Art: A Rationalist’s Take?

schrodingart · 12 Feb 2019 5:07 UTC
2 points
4 comments · 6 min read · LW link

Language, the Key to Everything

chris82179 · 12 Feb 2019 5:06 UTC
−2 points
2 comments · 4 min read · LW link

Triangle SSC Meetup-February

willbobaggins · 12 Feb 2019 3:07 UTC
1 point
0 comments · 1 min read · LW link

Would I think for ten thousand years?

Stuart_Armstrong · 11 Feb 2019 19:37 UTC
25 points
13 comments · 1 min read · LW link

“Normative assumptions” need not be complex

Stuart_Armstrong · 11 Feb 2019 19:03 UTC
11 points
0 comments · 2 min read · LW link

Emotional Climate Change—an inconvenient idea

marcus_gabler · 11 Feb 2019 17:55 UTC
−30 points
8 comments · 2 min read · LW link

Coherent behaviour in the real world is an incoherent concept

Richard_Ngo · 11 Feb 2019 17:00 UTC
51 points
17 comments · 9 min read · LW link