Spoiler-Free Review: Witcher 3: Wild Hunt (plus a Spoilerific section)

Zvi · Jul 4, 2020, 11:10 PM
14 points
2 comments · 17 min read · LW link
(thezvi.wordpress.com)

Classifying games like the Prisoner’s Dilemma

philh · Jul 4, 2020, 5:10 PM
111 points
28 comments · 6 min read · LW link · 1 review
(reasonableapproximation.net)

Causality and its harms

George3d6 · Jul 4, 2020, 2:42 PM
16 points
19 comments · 11 min read · LW link
(blog.cerebralab.com)

Tradeoff between desirable properties for baseline choices in impact measures

Vika · Jul 4, 2020, 11:56 AM
37 points
24 comments · 5 min read · LW link

AI-Feynman as a benchmark for what we should be aiming for

Faustus2 · Jul 4, 2020, 9:24 AM
8 points
1 comment · 2 min read · LW link

[Question] Replicated cognitive bias list?

moderock · Jul 4, 2020, 4:15 AM
12 points
3 comments · 1 min read · LW link

Let There be Sound: A Fristonian Meditation on Creativity

jollybard · Jul 4, 2020, 3:33 AM
3 points
2 comments · LW link
(jollybard.wordpress.com)

The silence is deafening – Devon Zuegel

Ben Pace · Jul 4, 2020, 2:30 AM
27 points
12 comments · 1 min read · LW link
(devonzuegel.com)

Site Redesign Feedback Requested

Raemon · Jul 3, 2020, 10:28 PM
46 points
14 comments · 1 min read · LW link

AI Unsafety via Non-Zero-Sum Debate

VojtaKovarik · Jul 3, 2020, 10:03 PM
25 points
10 comments · 5 min read · LW link

[Question] If someone you loved was experiencing unremitting suffering (related to a constellation of multi-dimensional factors and processes, those of which include anomalous states of consciousness and an iatrogenic mental health system), what would you think and what would you do?

Sara C · Jul 3, 2020, 9:02 PM
20 points
6 comments · 4 min read · LW link

[Crowdfunding] LessWrong podcast

Mati_Roy · Jul 3, 2020, 8:59 PM
9 points
6 comments · 1 min read · LW link

High Stock Prices Make Sense Right Now

johnswentworth · Jul 3, 2020, 8:16 PM
83 points
27 comments · 4 min read · LW link

Splitting Debate up into Two Subsystems

Nandi · Jul 3, 2020, 8:11 PM
13 points
5 comments · 4 min read · LW link

Research ideas to study humans with AI Safety in mind

Riccardo Volpato · Jul 3, 2020, 4:01 PM
23 points
2 comments · 5 min read · LW link

Poly Domestic Partnerships

jefftk · Jul 3, 2020, 2:10 PM
18 points
4 comments · 2 min read · LW link
(www.jefftk.com)

The Book of HPMOR Fanfics

Mati_Roy · Jul 3, 2020, 1:32 PM
41 points
18 comments · 1 min read · LW link

Open & Wel­come Thread—July 2020

habrykaJul 2, 2020, 10:41 PM
15 points
80 comments1 min readLW link

The allegory of the hospital

Sunny from QAD · Jul 2, 2020, 9:46 PM
5 points
20 comments · 2 min read · LW link
(questionsanddaylight.com)

Covid 7/2: It Could Be Worse

Zvi · Jul 2, 2020, 8:20 PM
85 points
13 comments · 16 min read · LW link
(thezvi.wordpress.com)

[Question] How to decide to get a nosejob or not?

snog toddgrass · Jul 2, 2020, 5:54 PM
3 points
9 comments · 1 min read · LW link

Goals and short descriptions

Michele Campolo · Jul 2, 2020, 5:41 PM
14 points
8 comments · 5 min read · LW link

[Question] Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization?

Evan_Gaensbauer · Jul 2, 2020, 5:38 PM
10 points
10 comments · 1 min read · LW link

June 2020 gwern.net newsletter

gwern · Jul 2, 2020, 2:19 PM
16 points
0 comments · LW link
(www.gwern.net)

The “AI Debate” Debate

michaelcohen · Jul 2, 2020, 10:16 AM
20 points
20 comments · 3 min read · LW link

Cambridge Virtual LW/SSC Meetup

NoSignalNoNoise · Jul 2, 2020, 3:45 AM
6 points
0 comments · 1 min read · LW link

Noise on the Channel

abramdemski · Jul 2, 2020, 1:58 AM
31 points
8 comments · 10 min read · LW link

[Question] Non offensive word for people who are not single-magisterium-Bayes thinkers

Tim Liptrot · Jul 1, 2020, 10:33 PM
3 points
18 comments · 1 min read · LW link

Second Wave Covid Deaths?

jefftk · Jul 1, 2020, 8:40 PM
35 points
16 comments · 2 min read · LW link
(www.jefftk.com)

[Question] Harry Potter and methods of rationality alternative ending.

Klen Salubri · Jul 1, 2020, 6:51 PM
13 points
25 comments · 1 min read · LW link

[Question] What’s the most easy, fast, efficient way to create and maintain a personal Blog?

Ferdinand Cachoeira · Jul 1, 2020, 6:51 PM
2 points
5 comments · 1 min read · LW link

Second-Order Existential Risk

Ideopunk · Jul 1, 2020, 6:46 PM
2 points
1 comment · 3 min read · LW link

How to Find Sources in an Unreliable World

Elizabeth · Jul 1, 2020, 6:30 PM
41 points
8 comments · 2 min read · LW link
(acesounderglass.com)
(acesounderglass.com)

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

Palus Astra · Jul 1, 2020, 5:30 PM
35 points
4 comments · 67 min read · LW link

[AN #106]: Evaluating generalization ability of learned reward models

Rohin Shah · Jul 1, 2020, 5:20 PM
14 points
2 comments · 11 min read · LW link
(mailchi.mp)

PSA: Cars don’t have ‘blindspots’

romeostevensit · Jul 1, 2020, 5:04 PM
37 points
5 comments · 1 min read · LW link

Forecasting Newsletter. June 2020.

NunoSempere · Jul 1, 2020, 9:46 AM
27 points
0 comments · 8 min read · LW link

Inviting Curated Authors to Give 5-Min Online Talks

Ben Pace · Jul 1, 2020, 1:05 AM
27 points
6 comments · 1 min read · LW link

Situating LessWrong in contemporary philosophy: An interview with Jon Livengood

Suspended Reason · Jul 1, 2020, 12:37 AM
117 points
21 comments · 19 min read · LW link

AvE: Assistance via Empowerment

FactorialCode · Jun 30, 2020, 10:07 PM
12 points
1 comment · 1 min read · LW link
(arxiv.org)

I am Bad at Flirting; Realizing that by Noticing Confusion

snog toddgrass · Jun 30, 2020, 8:05 PM
8 points
7 comments · 4 min read · LW link

Comparing AI Alignment Approaches to Minimize False Positive Risk

Gordon Seidoh Worley · Jun 30, 2020, 7:34 PM
5 points
0 comments · 9 min read · LW link

[Question] How ought I spend time?

Quinn · Jun 30, 2020, 4:53 PM
10 points
11 comments · 1 min read · LW link

Sick of struggling

dqups1 · Jun 30, 2020, 4:47 PM
2 points
7 comments · 1 min read · LW link

Somerville Mask Usage

jefftk · Jun 30, 2020, 2:50 PM
17 points
4 comments · 1 min read · LW link
(www.jefftk.com)

Web AI discussion Groups

Donald Hobson · Jun 30, 2020, 11:22 AM
11 points
0 comments · 2 min read · LW link

How do takeoff speeds affect the probability of bad outcomes from AGI?

KR · Jun 29, 2020, 10:06 PM
15 points
2 comments · 8 min read · LW link

AI Benefits Post 2: How AI Benefits Differs from AI Alignment & AI for Good

Cullen · Jun 29, 2020, 5:00 PM
8 points
7 comments · 2 min read · LW link

Thoughts as open tabs

ryan wong · Jun 29, 2020, 12:13 PM
17 points
12 comments · 2 min read · LW link

Optimized Propaganda with Bayesian Networks: Comment on “Articulating Lay Theories Through Graphical Models”

Zack_M_Davis · Jun 29, 2020, 2:45 AM
105 points
10 comments · 4 min read · LW link