Against naming things, and so on

whales · 15 Oct 2017 23:48 UTC
24 points
8 comments · 4 min read · LW link

Why no total winner?

Paul Crowley · 15 Oct 2017 22:01 UTC
36 points
19 comments · 2 min read · LW link

Identities are [Subconscious] Strategies

Ruby · 15 Oct 2017 18:10 UTC
39 points
1 comment · 4 min read · LW link

Building a civilisation-scale OODA loop for the problem of AGI

whpearson · 15 Oct 2017 17:56 UTC
3 points
2 comments · 1 min read · LW link

The Fish-Head Monk

Zvi · 14 Oct 2017 12:10 UTC
4 points
2 comments · 2 min read · LW link
(thezvi.wordpress.com)

Beginners’ Meditation

Zvi · 14 Oct 2017 12:10 UTC
2 points
5 comments · 3 min read · LW link
(thezvi.wordpress.com)

Welcome to the World

Zvi · 14 Oct 2017 12:10 UTC
9 points
3 comments · 1 min read · LW link
(thezvi.wordpress.com)

You can never be universally inclusive

Kaj_Sotala · 14 Oct 2017 11:30 UTC
36 points
9 comments · 2 min read · LW link
(kajsotala.fi)

“Focusing,” for skeptics.

Conor Moreton · 14 Oct 2017 7:00 UTC
102 points
27 comments · 7 min read · LW link
(medium.com)

My Attempt to Justify The Principle of Insufficient Reason (PIR)

DragonGod · 14 Oct 2017 6:12 UTC
0 points
1 comment · 1 min read · LW link
(mathb.in)

Activated social login, temporarily deactivated normal signup

habryka · 14 Oct 2017 3:59 UTC
5 points
11 comments · 1 min read · LW link

Instrumental Rationality 5: Interlude II

lifelonglearner · 14 Oct 2017 2:05 UTC
7 points
1 comment · 12 min read · LW link

Offloading Executive Functioning to Morality

weft · 14 Oct 2017 1:43 UTC
13 points
6 comments · 2 min read · LW link

Oxford Prioritisation Project Review

[deleted] · 13 Oct 2017 23:07 UTC
11 points
6 comments · 23 min read · LW link

Rare Exception or Common Exception

weft · 13 Oct 2017 22:02 UTC
14 points
3 comments · 1 min read · LW link

There’s No Fire Alarm for Artificial General Intelligence

Eliezer Yudkowsky · 13 Oct 2017 21:38 UTC
148 points
72 comments · 25 min read · LW link

Creating Space to Cultivate Skill

hamnox · 13 Oct 2017 15:51 UTC
9 points
8 comments · 6 min read · LW link

Humans can be assigned any values whatsoever...

Stuart_Armstrong · 13 Oct 2017 11:32 UTC
7 points
36 comments · 4 min read · LW link

Humans can be assigned any values whatsoever...

Stuart_Armstrong · 13 Oct 2017 11:29 UTC
16 points
6 comments · 4 min read · LW link

Jonathan Blow—Introspection techniques for dealing with lack of motivation, malaise, depression

nBrown · 13 Oct 2017 7:21 UTC
1 point
0 comments · 1 min read · LW link
(www.youtube.com)

Universal Paperclips

Morendil · 13 Oct 2017 6:16 UTC
2 points
0 comments · 1 min read · LW link
(decisionproblem.com)

SSC Meetup: Bay Area 10/14

Scott Alexander · 13 Oct 2017 3:30 UTC
4 points
0 comments · 1 min read · LW link
(slatestarcodex.com)

LesserWrong is now de facto the main site

Chris_Leong · 13 Oct 2017 1:19 UTC
15 points
4 comments · 1 min read · LW link

Instrumental Rationality 4.3: Breaking Habits and Conclusion

lifelonglearner · 12 Oct 2017 23:11 UTC
6 points
7 comments · 13 min read · LW link

Alan Kay—Programming and Scaling

namespace · 12 Oct 2017 14:46 UTC
6 points
2 comments · 1 min read · LW link
(www.youtube.com)

Beauty as a signal (map)

turchin · 12 Oct 2017 10:02 UTC
4 points
1 comment · 1 min read · LW link

Signal seeding

KatjaGrace · 12 Oct 2017 7:00 UTC
8 points
5 comments · 1 min read · LW link
(meteuphoric.wordpress.com)

Instrumental Rationality 4.2: Creating Habits

lifelonglearner · 12 Oct 2017 2:25 UTC
13 points
0 comments · 17 min read · LW link

Gnostic Rationality

Gordon Seidoh Worley · 11 Oct 2017 21:44 UTC
19 points
40 comments · 3 min read · LW link

Events section

the gears to ascension · 11 Oct 2017 16:24 UTC
2 points
6 comments · 1 min read · LW link

Mini-conference “Near-term AI safety”

turchin · 11 Oct 2017 15:19 UTC
5 points
1 comment · 1 min read · LW link

Mini-conference “Near-term AI safety”

avturchin · 11 Oct 2017 14:54 UTC
2 points
3 comments · 1 min read · LW link

Winning is for Losers

Jacob Falkovich · 11 Oct 2017 4:01 UTC
31 points
12 comments · 18 min read · LW link
(putanumonit.com)

Centralisation shapes Idealists into Cynics

whpearson · 10 Oct 2017 20:01 UTC
−2 points
11 comments · 3 min read · LW link

10/10/2017: Development update (“autosaving” & Intercom options)

habryka · 10 Oct 2017 19:15 UTC
2 points
2 comments · 1 min read · LW link

What would convince you you’d won the lottery?

Stuart_Armstrong · 10 Oct 2017 13:45 UTC
28 points
11 comments · 4 min read · LW link

Toy model of the AI control problem: animated version

Stuart_Armstrong · 10 Oct 2017 11:12 UTC
11 points
2 comments · 1 min read · LW link

Toy model of the AI control problem: animated version

Stuart_Armstrong · 10 Oct 2017 11:06 UTC
23 points
8 comments · 1 min read · LW link

Robustness as a Path to AI Alignment

abramdemski · 10 Oct 2017 8:14 UTC
45 points
9 comments · 9 min read · LW link

Distinctions in Types of Thought

sarahconstantin · 10 Oct 2017 3:36 UTC
37 points
24 comments · 13 min read · LW link

Artificial Unintelligence

Gordon Seidoh Worley · 10 Oct 2017 1:37 UTC
0 points
0 comments · 1 min read · LW link
(entirelyuseless.wordpress.com)

Building a Community Institution In Five Hours A Week

spiralingintocontrol · 9 Oct 2017 21:12 UTC
10 points
0 comments · 1 min read · LW link
(particularvirtue.blogspot.com)

Learnday

hamnox · 9 Oct 2017 19:15 UTC
8 points
1 comment · 2 min read · LW link

Initial thoughts on assisted formatting of discussion posts

Ezra · 9 Oct 2017 18:46 UTC
3 points
4 comments · 1 min read · LW link

The Recognizing vs Generating Distinction

lifelonglearner · 9 Oct 2017 16:56 UTC
6 points
2 comments · 1 min read · LW link
(mapandterritory.org)

The Just World Hypothesis

michael_vassar2 · 9 Oct 2017 6:03 UTC
17 points
9 comments · 3 min read · LW link

Double Crux Example: Should HPMOR be on the Front Page?

Raemon · 9 Oct 2017 3:50 UTC
19 points
12 comments · 18 min read · LW link

Community Capital

weft · 9 Oct 2017 3:49 UTC
35 points
15 comments · 3 min read · LW link

Personal Model of Social Energy

PDV · 9 Oct 2017 3:14 UTC
2 points
1 comment · 1 min read · LW link

Instrumental Rationality 4.1: Modeling Habits

lifelonglearner · 9 Oct 2017 1:21 UTC
11 points
2 comments · 10 min read · LW link