“Other people are wrong” vs “I am right”

Buck · 22 Feb 2019 20:01 UTC
262 points · 20 comments · 9 min read · LW link · 2 reviews

Rule Thinkers In, Not Out

Scott Alexander · 27 Feb 2019 2:40 UTC
226 points · 67 comments · 4 min read · LW link · 4 reviews
(slatestarcodex.com)

Humans Who Are Not Concentrating Are Not General Intelligences

sarahconstantin · 25 Feb 2019 20:40 UTC
187 points · 35 comments · 6 min read · LW link · 1 review
(srconstantin.wordpress.com)

Unconscious Economics

jacobjacob · 27 Feb 2019 12:58 UTC
138 points · 30 comments · 4 min read · LW link · 3 reviews

Blackmail

Zvi · 19 Feb 2019 3:50 UTC
133 points · 55 comments · 16 min read · LW link · 2 reviews
(thezvi.wordpress.com)

Thoughts on Human Models

21 Feb 2019 9:10 UTC
126 points · 32 comments · 10 min read · LW link · 1 review

The Tale of Alice Almost: Strategies for Dealing With Pretty Good People

sarahconstantin · 27 Feb 2019 19:34 UTC
116 points · 6 comments · 6 min read · LW link · 2 reviews
(srconstantin.wordpress.com)

Epistemic Tenure

Scott Garrabrant · 18 Feb 2019 22:56 UTC
89 points · 27 comments · 3 min read · LW link

Probability space has 2 metrics

Donald Hobson · 10 Feb 2019 0:28 UTC
88 points · 11 comments · 1 min read · LW link

Some Thoughts on Metaphilosophy

Wei Dai · 10 Feb 2019 0:28 UTC
76 points · 30 comments · 4 min read · LW link

Test Cases for Impact Regularisation Methods

DanielFilan · 6 Feb 2019 21:50 UTC
72 points · 5 comments · 13 min read · LW link
(danielfilan.com)

[Question] How does Gradient Descent Interact with Goodhart?

Scott Garrabrant · 2 Feb 2019 0:14 UTC
68 points · 19 comments · 4 min read · LW link

The Case for a Bigger Audience

John_Maxwell · 9 Feb 2019 7:22 UTC
68 points · 58 comments · 2 min read · LW link

RAISE is launching their MVP

null · 26 Feb 2019 11:45 UTC
67 points · 1 comment · 1 min read · LW link

Pavlov Generalizes

abramdemski · 20 Feb 2019 9:03 UTC
67 points · 4 comments · 7 min read · LW link

How the MtG Color Wheel Explains AI Safety

Scott Garrabrant · 15 Feb 2019 23:42 UTC
65 points · 4 comments · 6 min read · LW link

When to use quantilization

RyanCarey · 5 Feb 2019 17:17 UTC
65 points · 5 comments · 4 min read · LW link

The Argument from Philosophical Difficulty

Wei Dai · 10 Feb 2019 0:28 UTC
59 points · 31 comments · 1 min read · LW link

The Hamming Question

Raemon · 8 Feb 2019 19:34 UTC
59 points · 38 comments · 3 min read · LW link

[Question] If a “Kickstarter for Inadequate Equilibria” was built, do you have a concrete inadequate equilibrium to fix?

Raemon · 21 Feb 2019 21:32 UTC
55 points · 39 comments · 1 min read · LW link

Two Small Experiments on GPT-2

jimrandomh · 21 Feb 2019 2:59 UTC
54 points · 28 comments · 1 min read · LW link

[Question] Why didn’t Agoric Computing become popular?

Wei Dai · 16 Feb 2019 6:19 UTC
52 points · 22 comments · 2 min read · LW link

Conclusion to the sequence on value learning

Rohin Shah · 3 Feb 2019 21:05 UTC
51 points · 20 comments · 5 min read · LW link

Coherent behaviour in the real world is an incoherent concept

Richard_Ngo · 11 Feb 2019 17:00 UTC
51 points · 17 comments · 9 min read · LW link

Urgent & important: How (not) to do your to-do list

bfinn · 1 Feb 2019 17:44 UTC
51 points · 20 comments · 13 min read · LW link

[Question] How does OpenAI’s language model affect our AI timeline estimates?

jimrandomh · 15 Feb 2019 3:11 UTC
50 points · 7 comments · 1 min read · LW link

[Question] How good is a human’s gut judgement at guessing someone’s IQ?

habryka · 25 Feb 2019 21:23 UTC
50 points · 21 comments · 1 min read · LW link

Arguments for moral indefinability

Richard_Ngo · 12 Feb 2019 10:40 UTC
50 points · 10 comments · 7 min read · LW link
(thinkingcomplete.blogspot.com)

Avoiding Jargon Confusion

Raemon · 17 Feb 2019 23:37 UTC
46 points · 35 comments · 4 min read · LW link

(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach

Ben Pace · 4 Feb 2019 22:08 UTC
43 points · 5 comments · 7 min read · LW link

The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work

ozziegooen · 14 Feb 2019 16:21 UTC
43 points · 15 comments · 3 min read · LW link

Learning preferences by looking at the world

Rohin Shah · 12 Feb 2019 22:25 UTC
43 points · 10 comments · 7 min read · LW link
(bair.berkeley.edu)

Is voting theory important? An attempt to check my bias.

Jameson Quinn · 17 Feb 2019 23:45 UTC
42 points · 14 comments · 6 min read · LW link

HCH is not just Mechanical Turk

William_S · 9 Feb 2019 0:46 UTC
42 points · 6 comments · 3 min read · LW link

Major Donation: Long Term Future Fund Application Extended 1 Week

habryka · 16 Feb 2019 23:30 UTC
42 points · 3 comments · 1 min read · LW link

Philosophy as low-energy approximation

Charlie Steiner · 5 Feb 2019 19:34 UTC
40 points · 20 comments · 3 min read · LW link

The RAIN Framework for Informational Effectiveness

ozziegooen · 13 Feb 2019 12:54 UTC
37 points · 16 comments · 6 min read · LW link

Knowing I’m Being Tricked is Barely Enough

Elizabeth · 26 Feb 2019 17:50 UTC
37 points · 10 comments · 2 min read · LW link
(acesounderglass.com)

How to get value learning and reference wrong

Charlie Steiner · 26 Feb 2019 20:22 UTC
37 points · 2 comments · 6 min read · LW link

Implications of GPT-2

Gurkenglas · 18 Feb 2019 10:57 UTC
36 points · 28 comments · 1 min read · LW link

Some disjunctive reasons for urgency on AI risk

Wei Dai · 15 Feb 2019 20:43 UTC
36 points · 24 comments · 1 min read · LW link

New versions of posts in “Map and Territory” and “How To Actually Change Your Mind” are up (also, new revision system)

habryka · 26 Feb 2019 3:17 UTC
36 points · 3 comments · 1 min read · LW link

[Question] When should we expect the education bubble to pop? How can we short it?

jacobjacob · 9 Feb 2019 21:39 UTC
35 points · 12 comments · 1 min read · LW link

Is the World Getting Better? A brief summary of recent debate

ErickBall · 6 Feb 2019 17:38 UTC
35 points · 8 comments · 2 min read · LW link
(capx.co)

Drexler on AI Risk

PeterMcCluskey · 1 Feb 2019 5:11 UTC
35 points · 10 comments · 9 min read · LW link
(www.bayesianinvestor.com)

EA grants available (to individuals)

Jameson Quinn · 7 Feb 2019 15:17 UTC
34 points · 8 comments · 3 min read · LW link

Can HCH epistemically dominate Ramanujan?

zhukeepa · 23 Feb 2019 22:00 UTC
33 points · 6 comments · 2 min read · LW link

[Question] Why do you reject negative utilitarianism?

Teo Ajantaival · 11 Feb 2019 15:38 UTC
32 points · 27 comments · 1 min read · LW link

Complexity Penalties in Statistical Learning

michael_h · 6 Feb 2019 4:13 UTC
31 points · 3 comments · 6 min read · LW link

[Question] Is LessWrong a “classic style intellectual world”?

Gordon Seidoh Worley · 26 Feb 2019 21:33 UTC
29 points · 6 comments · 1 min read · LW link