set of cards

November H · Mar 5, 2018, 10:37 PM
7 points
3 comments · 1 min read · LW link

Ancient Social Patterns: Comitatus

ryan_b · Mar 5, 2018, 6:28 PM
8 points
0 comments · 2 min read · LW link

Takeoff Speed: Simple Asymptotics in a Toy Model.

Aaron Roth · Mar 5, 2018, 5:07 PM
21 points
21 comments · 9 min read · LW link
(aaronsadventures.blogspot.com)

Murphy’s Quest Ch 4: Noticing Confusion

alkjash · Mar 5, 2018, 7:20 AM
21 points
8 comments · 3 min read · LW link
(radimentary.wordpress.com)

God Help Us, Let’s Try To Understand Friston On Free Energy

Scott Alexander · Mar 5, 2018, 6:00 AM
54 points
43 comments · 14 min read · LW link
(slatestarcodex.com)

Explanation of Paul’s AI-Alignment agenda by Ajeya Cotra

habryka · Mar 5, 2018, 3:10 AM
20 points
0 comments · 1 min read · LW link
(ai-alignment.com)

Murphy’s Quest Ch 3: Murphyjitsu

alkjash · Mar 5, 2018, 2:40 AM
18 points
0 comments · 3 min read · LW link
(radimentary.wordpress.com)

Argument, intuition, and recursion

paulfchristiano · Mar 5, 2018, 1:37 AM
44 points
13 comments · 9 min read · LW link · 1 review

On Defense Mechanisms

Quaerendo · Mar 4, 2018, 6:53 PM
25 points
22 comments · 3 min read · LW link

Correct Models Are Bad

sapphire · Mar 4, 2018, 4:51 PM
2 points
6 comments · 1 min read · LW link
(brianlui.dog)

[Paper] Surviving global risks through the preservation of humanity’s data on the Moon

avturchin · Mar 4, 2018, 7:07 AM
4 points
5 comments · 1 min read · LW link

Beta-Beta – Recent Discussion

Radmin · Mar 4, 2018, 5:34 AM
12 points
6 comments · 1 min read · LW link

Murphy’s Quest Ch 1: Exposure Therapy

alkjash · Mar 4, 2018, 4:50 AM
26 points
5 comments · 3 min read · LW link
(radimentary.wordpress.com)

Murphy’s Quest Ch 2: Empiricism

alkjash · Mar 4, 2018, 4:50 AM
25 points
4 comments · 5 min read · LW link
(radimentary.wordpress.com)

Is there a Connection Between Greatness in Math and Philosophy?

Scott Garrabrant · Mar 3, 2018, 11:25 PM
13 points
6 comments · 1 min read · LW link

Funding for AI alignment research

paulfchristiano · Mar 3, 2018, 9:52 PM
39 points
24 comments · 1 min read · LW link
(docs.google.com)

Funding for independent AI alignment research

paulfchristiano · Mar 3, 2018, 9:44 PM
4 points
1 comment · 1 min read · LW link
(docs.google.com)

The Jordan Peterson Mask

Jacob Falkovich · Mar 3, 2018, 7:49 PM
61 points
154 comments · 12 min read · LW link

Sacred Cash

Zvi · Mar 3, 2018, 1:20 PM
29 points
29 comments · 3 min read · LW link
(thezvi.wordpress.com)

Interactive Bayes Theorem Visualization

Allen Kim · Mar 3, 2018, 4:00 AM
15 points
5 comments · 1 min read · LW link
(allenkim67.github.io)

Monthly Meta: Common Knowledge

Chris_Leong · Mar 3, 2018, 12:05 AM
17 points
14 comments · 1 min read · LW link

Social Technology

Samo Burja · Mar 2, 2018, 7:54 PM
13 points
3 comments · 11 min read · LW link

Improved regret bound for DRL

Vanessa Kosoy · Mar 2, 2018, 12:49 PM
0 points
0 comments · 1 min read · LW link

March 2018 Media Thread

ArisKatsaris · Mar 2, 2018, 1:06 AM
1 point
15 comments · 1 min read · LW link

Quick Nate/Eliezer comments on discontinuity

Rob Bensinger · Mar 1, 2018, 10:03 PM
44 points
1 comment · 2 min read · LW link

Hammertime Intermission #2

alkjash · Mar 1, 2018, 6:20 PM
23 points
7 comments · 1 min read · LW link
(radimentary.wordpress.com)

Ms. Blue, meet Mr. Green

alexei · Mar 1, 2018, 1:43 PM
43 points
30 comments · 12 min read · LW link

Beyond algorithmic equivalence: self-modelling

Stuart_Armstrong · Mar 1, 2018, 1:31 PM
0 points
0 comments · 1 min read · LW link
(www.lesserwrong.com)

Beyond algorithmic equivalence: algorithmic noise

Stuart_Armstrong · Mar 1, 2018, 1:31 PM
0 points
0 comments · 1 min read · LW link
(www.lesserwrong.com)

Friendship

alkjash · Mar 1, 2018, 6:00 AM
24 points
16 comments · 3 min read · LW link
(radimentary.wordpress.com)

Kidneys, trade, sacredness, and space travel

Benquo · Mar 1, 2018, 5:20 AM
15 points
34 comments · 3 min read · LW link
(benjaminrosshoffman.com)

Ambiguity Detection

TurnTrout · Mar 1, 2018, 4:23 AM
11 points
8 comments · 4 min read · LW link

Extended Quote on the Institution of Academia

Ben Pace · Mar 1, 2018, 2:58 AM
54 points
23 comments · 9 min read · LW link

How safe “safe” AI development?

Gordon Seidoh Worley · Feb 28, 2018, 11:21 PM
9 points
1 comment · 1 min read · LW link

Beyond algorithmic equivalence: self-modelling

Stuart_Armstrong · Feb 28, 2018, 4:55 PM
10 points
3 comments · 1 min read · LW link

Beyond algorithmic equivalence: algorithmic noise

Stuart_Armstrong · Feb 28, 2018, 4:55 PM
10 points
4 comments · 2 min read · LW link

Using the universal prior for logical uncertainty (retracted)

cousin_it · Feb 28, 2018, 1:07 PM
15 points
13 comments · 2 min read · LW link

2/27/08 Update – Frontpage 3.0

Raemon · Feb 28, 2018, 6:26 AM
15 points
21 comments · 1 min read · LW link

TDT for Humans

alkjash · Feb 28, 2018, 5:40 AM
26 points
7 comments · 5 min read · LW link
(radimentary.wordpress.com)

Set Up for Success: Insights from ‘Naïve Set Theory’

TurnTrout · Feb 28, 2018, 2:01 AM
31 points
40 comments · 3 min read · LW link

Intuition should be applied at the lowest possible level

Rafael Harth · Feb 27, 2018, 10:58 PM
10 points
9 comments · 1 min read · LW link

The sad state of Rationality Zürich—Effective Altruism Zürich included

roland · Feb 27, 2018, 2:51 PM
−8 points
50 comments · 3 min read · LW link

The worst trolley problem in the world

CronoDAS · Feb 27, 2018, 3:56 AM
1 point
1 comment · 1 min read · LW link

Categories of Sacredness

Zvi · Feb 27, 2018, 2:00 AM
26 points
35 comments · 8 min read · LW link
(thezvi.wordpress.com)

More on the Linear Utility Hypothesis and the Leverage Prior

AlexMennen · Feb 26, 2018, 11:53 PM
16 points
4 comments · 9 min read · LW link

Goal Factoring

alkjash · Feb 26, 2018, 11:30 PM
27 points
4 comments · 2 min read · LW link
(radimentary.wordpress.com)

Inconvenience Is Qualitatively Bad

Alicorn · Feb 26, 2018, 11:27 PM
83 points
52 comments · 2 min read · LW link

The Hamming Problem of Group Rationality

PDV · Feb 26, 2018, 6:59 PM
6 points
36 comments · 1 min read · LW link

Focusing

alkjash · Feb 26, 2018, 6:10 AM
20 points
22 comments · 3 min read · LW link
(radimentary.wordpress.com)

Mapping the Archipelago

alkjash · Feb 26, 2018, 5:09 AM
14 points
24 comments · 1 min read · LW link