[Question] Starting a rationality dojo; any tips?

Bae's Theorem · 25 Feb 2019 23:56 UTC
6 points · 0 comments · 1 min read · LW link

[Question] Native mental representations that give huge speedups on problems?

two-ox-heads · 25 Feb 2019 23:42 UTC
17 points · 4 comments · 2 min read · LW link

[Question] How good is a human’s gut judgement at guessing someone’s IQ?

habryka · 25 Feb 2019 21:23 UTC
50 points · 21 comments · 1 min read · LW link

LW2.0 Mailing List for Breaking API Changes

Raemon · 25 Feb 2019 21:23 UTC
12 points · 4 comments · 1 min read · LW link

Gravity and the Thermodynamics of Human Economy and Existence

Lymphatera · 25 Feb 2019 21:11 UTC
−2 points · 0 comments · 17 min read · LW link

Disclosing the unsaid

rayraegah · 25 Feb 2019 21:10 UTC
10 points · 5 comments · 1 min read · LW link

Lydian song

Martin Sustrik · 25 Feb 2019 20:50 UTC
7 points · 1 comment · 1 min read · LW link
(250bpm.com)

Humans Who Are Not Concentrating Are Not General Intelligences

sarahconstantin · 25 Feb 2019 20:40 UTC
187 points · 35 comments · 6 min read · LW link · 1 review
(srconstantin.wordpress.com)

Rationalist Vipassana Meditation Retreat

DreamFlasher · 25 Feb 2019 10:10 UTC
24 points · 2 comments · 1 min read · LW link

Informal Post on Motivation

Ruby · 23 Feb 2019 23:35 UTC
29 points · 4 comments · 8 min read · LW link

Can HCH epistemically dominate Ramanujan?

zhukeepa · 23 Feb 2019 22:00 UTC
33 points · 6 comments · 2 min read · LW link

AI—Intelligence Realising Itself

TPATA · 23 Feb 2019 21:13 UTC
−4 points · 0 comments · 3 min read · LW link

Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall

JohnBuridan · 23 Feb 2019 17:48 UTC
8 points · 19 comments · 4 min read · LW link

“Other people are wrong” vs “I am right”

Buck · 22 Feb 2019 20:01 UTC
263 points · 20 comments · 9 min read · LW link · 2 reviews

Tiles: Report on Programmatic Code Generation

Martin Sustrik · 22 Feb 2019 0:10 UTC
5 points · 5 comments · 6 min read · LW link
(250bpm.com)

Alignment Newsletter #46

Rohin Shah · 22 Feb 2019 0:10 UTC
12 points · 0 comments · 9 min read · LW link
(mailchi.mp)

[Question] How could “Kickstarter for Inadequate Equilibria” be used for evil or turn out to be net-negative?

Raemon · 21 Feb 2019 21:36 UTC
25 points · 17 comments · 1 min read · LW link

[Question] If a “Kickstarter for Inadequate Equilibria” was built, do you have a concrete inadequate equilibrium to fix?

Raemon · 21 Feb 2019 21:32 UTC
56 points · 40 comments · 1 min read · LW link

Life, not a game

ArthurLidia · 21 Feb 2019 19:10 UTC
−10 points · 2 comments · 2 min read · LW link

Ideas for Next Generation Prediction Technologies

ozziegooen · 21 Feb 2019 11:38 UTC
22 points · 25 comments · 7 min read · LW link

[Question] What’s your favorite LessWrong post?

pepe_prime · 21 Feb 2019 10:39 UTC
27 points · 8 comments · 1 min read · LW link

Thoughts on Human Models

21 Feb 2019 9:10 UTC
126 points · 32 comments · 10 min read · LW link · 1 review

Two Small Experiments on GPT-2

jimrandomh · 21 Feb 2019 2:59 UTC
54 points · 28 comments · 1 min read · LW link

Predictive Reasoning Systems

ozziegooen · 20 Feb 2019 19:44 UTC
27 points · 2 comments · 5 min read · LW link

LessWrong DC: Age of Enlightenment

rusalkii · 20 Feb 2019 18:39 UTC
1 point · 0 comments · 1 min read · LW link

[Question] When does introspection avoid the pitfalls of rumination?

rk · 20 Feb 2019 14:14 UTC
24 points · 12 comments · 1 min read · LW link

What I learned giving a lecture on NVC

Yoav Ravid · 20 Feb 2019 9:08 UTC
13 points · 2 comments · 2 min read · LW link

Pavlov Generalizes

abramdemski · 20 Feb 2019 9:03 UTC
67 points · 4 comments · 7 min read · LW link

Leukemia Has Won

Capybasilisk · 20 Feb 2019 7:11 UTC
1 point · 2 comments · 1 min read · LW link
(alex.blog)

[Question] Is there an assurance-contract website in work?

Yoav Ravid · 20 Feb 2019 6:14 UTC
18 points · 31 comments · 1 min read · LW link

First steps of a rationality skill bootstrap

hamnox · 20 Feb 2019 0:57 UTC
10 points · 0 comments · 6 min read · LW link

Impact Prizes as an alternative to Certificates of Impact

ozziegooen · 20 Feb 2019 0:46 UTC
20 points · 0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

De-Bugged brains wanted

marcus_gabler · 19 Feb 2019 18:30 UTC
−16 points · 17 comments · 1 min read · LW link

[Link] OpenAI on why we need social scientists

ioannes · 19 Feb 2019 16:59 UTC
14 points · 3 comments · 1 min read · LW link

Kocherga’s leaflet

berekuk · 19 Feb 2019 12:06 UTC
26 points · 2 comments · 1 min read · LW link

Blackmail

Zvi · 19 Feb 2019 3:50 UTC
133 points · 55 comments · 16 min read · LW link · 2 reviews
(thezvi.wordpress.com)

Decelerating: laser vs gun vs rocket

Stuart_Armstrong · 18 Feb 2019 23:21 UTC
22 points · 16 comments · 4 min read · LW link

Epistemic Tenure

Scott Garrabrant · 18 Feb 2019 22:56 UTC
89 points · 27 comments · 3 min read · LW link

[Question] A Strange Situation

Flange Finnegan · 18 Feb 2019 20:38 UTC
12 points · 10 comments · 1 min read · LW link

Implications of GPT-2

Gurkenglas · 18 Feb 2019 10:57 UTC
36 points · 28 comments · 1 min read · LW link

Is voting theory important? An attempt to check my bias.

Jameson Quinn · 17 Feb 2019 23:45 UTC
42 points · 14 comments · 6 min read · LW link

Avoiding Jargon Confusion

Raemon · 17 Feb 2019 23:37 UTC
46 points · 35 comments · 4 min read · LW link

Robin Hanson on Lumpiness of AI Services

DanielFilan · 17 Feb 2019 23:08 UTC
15 points · 2 comments · 2 min read · LW link
(www.overcomingbias.com)

The Clockmaker’s Argument (But not Really)

GregorDeVillain · 17 Feb 2019 21:20 UTC
1 point · 3 comments · 3 min read · LW link

Can We Place Trust in Post-AGI Forecasting Evaluations?

ozziegooen · 17 Feb 2019 19:20 UTC
22 points · 16 comments · 2 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · 17 Feb 2019 18:28 UTC
6 points · 2 comments · 1 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · 17 Feb 2019 18:27 UTC
6 points · 0 comments · 1 min read · LW link

Extraordinary ethics require extraordinary arguments

aaq · 17 Feb 2019 14:59 UTC
26 points · 6 comments · 2 min read · LW link

Limiting an AGI’s Context Temporally

EulersApprentice · 17 Feb 2019 3:29 UTC
5 points · 11 comments · 1 min read · LW link

Major Donation: Long Term Future Fund Application Extended 1 Week

habryka · 16 Feb 2019 23:30 UTC
42 points · 3 comments · 1 min read · LW link