Preface

Eliezer Yudkowsky · 11 Mar 2015 19:00 UTC
779 points
15 comments · 4 min read · LW link

Biases: An Introduction

Rob Bensinger · 11 Mar 2015 19:00 UTC
277 points
14 comments · 5 min read · LW link

Chapter 1: A Day of Very Low Probability

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
246 points
19 comments · 9 min read · LW link

Rationality: From AI to Zombies

Rob Bensinger · 13 Mar 2015 15:11 UTC
127 points
104 comments · 2 min read · LW link

Chapter 2: Everything I Believe is False

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
96 points
7 comments · 5 min read · LW link

Half-assing it with everything you’ve got

So8res · 12 Mar 2015 7:00 UTC
87 points
0 comments · 8 min read · LW link
(mindingourway.com)

Don’t Be Afraid of Asking Personally Important Questions of Less Wrong

Evan_Gaensbauer · 17 Mar 2015 6:54 UTC
80 points
47 comments · 3 min read · LW link

Chapter 5: The Fundamental Attribution Error

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
78 points
11 comments · 11 min read · LW link

Chapter 3: Comparing Reality To Its Alternatives

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
77 points
5 comments · 6 min read · LW link

Chapter 4: The Efficient Market Hypothesis

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
76 points
5 comments · 6 min read · LW link

HPMOR Q&A by Eliezer at Wrap Party in Berkeley [Transcription]

sceaduwe · 16 Mar 2015 20:54 UTC
76 points
20 comments · 10 min read · LW link

Announcing the Complice Less Wrong Study Hall

MalcolmOcean · 2 Mar 2015 23:37 UTC
75 points
19 comments · 2 min read · LW link

Chapter 122: Something to Protect: Hermione Granger

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
74 points
16 comments · 41 min read · LW link

Chapter 6: The Planning Fallacy

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
70 points
7 comments · 32 min read · LW link

Chapter 7: Reciprocation

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
69 points
12 comments · 36 min read · LW link

Announcement: The Sequences eBook will be released in mid-March

Rob Bensinger · 3 Mar 2015 1:58 UTC
69 points
45 comments · 2 min read · LW link

Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion

emr · 26 Mar 2015 0:14 UTC
67 points
71 comments · 1 min read · LW link

Answer to Job

Scott Alexander · 15 Mar 2015 18:02 UTC
66 points
7 comments · 4 min read · LW link

Chapter 8: Positive Bias

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
63 points
9 comments · 19 min read · LW link

Optimization and the Intelligence Explosion

Eliezer Yudkowsky · 11 Mar 2015 19:00 UTC
62 points
2 comments · 7 min read · LW link

Chapter 10: Self Awareness, Part II

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
60 points
9 comments · 14 min read · LW link

Can we talk about mental illness?

riparianx · 8 Mar 2015 8:24 UTC
58 points
108 comments · 1 min read · LW link

Chapter 45: Humanism, Pt 3

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
55 points
8 comments · 11 min read · LW link

Twenty basic rules for intelligent money management

James_Miller · 19 Mar 2015 17:57 UTC
54 points
55 comments · 9 min read · LW link

Calibration Test with database of 150,000+ questions

Nanashi · 14 Mar 2015 11:22 UTC
54 points
32 comments · 1 min read · LW link

New forum for MIRI research: Intelligent Agent Foundations Forum

orthonormal · 20 Mar 2015 0:35 UTC
53 points
43 comments · 1 min read · LW link

Minds: An Introduction

Rob Bensinger · 11 Mar 2015 19:00 UTC
52 points
2 comments · 6 min read · LW link

Chapter 9: Title Redacted, Part 1

Eliezer Yudkowsky · 14 Mar 2015 7:00 UTC
49 points
1 comment · 8 min read · LW link

Defeating the Villain

Zubon · 26 Mar 2015 21:43 UTC
48 points
38 comments · 2 min read · LW link

A map of LWers—find members of the community living near you.

acchan · 13 Mar 2015 17:58 UTC
47 points
12 comments · 1 min read · LW link

Rationality: From AI to Zombies online reading group

Mark_Friedenbach · 21 Mar 2015 9:54 UTC
46 points
16 comments · 2 min read · LW link

Rationality: An Introduction

Rob Bensinger · 11 Mar 2015 19:00 UTC
45 points
9 comments · 8 min read · LW link

Feeling Moral

Eliezer Yudkowsky · 11 Mar 2015 19:00 UTC
45 points
8 comments · 3 min read · LW link

Chapter 14: The Unknown and the Unknowable

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
44 points
9 comments · 16 min read · LW link

Chapter 12: Impulse Control

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
43 points
1 comment · 13 min read · LW link

Chapter 13: Asking the Wrong Questions

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
40 points
5 comments · 22 min read · LW link

Chapter 16: Lateral Thinking

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
40 points
5 comments · 21 min read · LW link

Chapter 20: Bayes’s Theorem

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
39 points
1 comment · 20 min read · LW link

Slate Star Codex: alternative comment threads on LessWrong?

tog · 27 Mar 2015 21:05 UTC
39 points
32 comments · 1 min read · LW link

Chapter 21: Rationalization

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
39 points
7 comments · 21 min read · LW link

Chapter 19: Delayed Gratification

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
38 points
5 comments · 26 min read · LW link

Chapter 18: Dominance Hierarchies

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
37 points
1 comment · 30 min read · LW link

Forum Digest: Corrigibility, utility indifference, & related control ideas

Benya_Fallenstein · 24 Mar 2015 17:39 UTC
35 points
5 comments · 4 min read · LW link

Chapter 15: Conscientiousness

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
35 points
2 comments · 12 min read · LW link

Chapter 17: Locating the Hypothesis

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
35 points
3 comments · 42 min read · LW link

Future of Life Institute existential risk news site

Vika · 19 Mar 2015 14:33 UTC
35 points
2 comments · 1 min read · LW link

New(ish) AI control ideas

Stuart_Armstrong · 5 Mar 2015 17:03 UTC
34 points
14 comments · 3 min read · LW link

Chapter 11: Omake Files 1, 2, 3

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
32 points
3 comments · 9 min read · LW link

Chapter 23: Belief in Belief

Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC
32 points
5 comments · 23 min read · LW link

In Praise of Maximizing – With Some Caveats

David Althaus · 15 Mar 2015 19:40 UTC
32 points
19 comments · 10 min read · LW link