Cassette Tape Thoughts

Elizabeth · Jan 22, 2020, 10:50 PM
48 points
0 comments · 2 min read · LW link
(acesounderglass.com)

Why Do You Keep Having This Problem?

Davis_Kingsley · Jan 20, 2020, 8:33 AM
47 points
16 comments · 1 min read · LW link

Underappreciated points about utility functions (of both sorts)

Sniffnoy · Jan 4, 2020, 7:27 AM
47 points
61 comments · 15 min read · LW link

What we Know vs. How we Know it?

Elizabeth · Jan 6, 2020, 12:30 AM
46 points
0 comments · 2 min read · LW link
(acesounderglass.com)

Predictors exist: CDT going bonkers… forever

Stuart_Armstrong · Jan 14, 2020, 4:19 PM
46 points
31 comments · 1 min read · LW link

On Being Robust

TurnTrout · Jan 10, 2020, 3:51 AM
45 points
7 comments · 2 min read · LW link

Key Decision Analysis—a fundamental rationality technique

Eli Tyre · Jan 12, 2020, 5:59 AM
44 points
10 comments · 3 min read · LW link

Malign generalization without internal search

Matthew Barnett · Jan 12, 2020, 6:03 PM
43 points
12 comments · 4 min read · LW link

Using vector fields to visualise preferences and make them consistent

Jan 28, 2020, 7:44 PM
42 points
32 comments · 11 min read · LW link

Characterising utopia

Richard_Ngo · Jan 2, 2020, 12:00 AM
40 points
5 comments · 22 min read · LW link
(thinkingcomplete.blogspot.com)

In Defense of the Arms Races… that End Arms Races

Gentzel · Jan 15, 2020, 9:30 PM
38 points
9 comments · 3 min read · LW link
(theconsequentialist.wordpress.com)

Book review: Human Compatible

PeterMcCluskey · Jan 19, 2020, 3:32 AM
37 points
2 comments · 5 min read · LW link
(www.bayesianinvestor.com)

[Question] Since figuring out human values is hard, what about, say, monkey values?

Shmi · Jan 1, 2020, 9:56 PM
37 points
13 comments · 1 min read · LW link

If brains are computers, what kind of computers are they? (Dennett transcript)

Ben Pace · Jan 30, 2020, 5:07 AM
37 points
9 comments · 27 min read · LW link

Logical Representation of Causal Models

johnswentworth · Jan 21, 2020, 8:04 PM
37 points
0 comments · 3 min read · LW link

Exploring safe exploration

evhub · Jan 6, 2020, 9:07 PM
37 points
8 comments · 3 min read · LW link

Does GPT-2 Understand Anything?

Douglas Summers-Stay · Jan 2, 2020, 5:09 PM
37 points
23 comments · 5 min read · LW link

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

Rohin Shah · Jan 2, 2020, 6:20 PM
36 points
95 comments · 10 min read · LW link
(mailchi.mp)

Are “superforecasters” a real phenomenon?

reallyeli · Jan 9, 2020, 1:23 AM
36 points
29 comments · 1 min read · LW link

Formulating Reductive Agency in Causal Models

johnswentworth · Jan 23, 2020, 5:03 PM
33 points
0 comments · 2 min read · LW link

[AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment

Rohin Shah · Jan 8, 2020, 6:00 PM
32 points
4 comments · 11 min read · LW link
(mailchi.mp)

Dissolving Confusion around Functional Decision Theory

scasper · Jan 5, 2020, 6:38 AM
32 points
24 comments · 9 min read · LW link

Studying Early Stage Science: Research Program Introduction

habryka · Jan 17, 2020, 10:12 PM
32 points
1 comment · 15 min read · LW link
(medium.com)

10 posts I like in the 2018 Review

Ben Pace · Jan 11, 2020, 2:23 AM
31 points
0 comments · 2 min read · LW link

The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs

Gentzel · Jan 15, 2020, 11:10 PM
30 points
4 comments · 3 min read · LW link
(theconsequentialist.wordpress.com)

Why a New Rationalization Sequence?

dspeyer · Jan 13, 2020, 6:46 AM
30 points
8 comments · 3 min read · LW link

[Question] Use-cases for computations, other than running them?

johnswentworth · Jan 19, 2020, 8:52 PM
30 points
6 comments · 2 min read · LW link

Definitions of Causal Abstraction: Reviewing Beckers & Halpern

johnswentworth · Jan 7, 2020, 12:03 AM
30 points
4 comments · 4 min read · LW link

Update on Ought’s experiments on factored evaluation of arguments

Owain_Evans · Jan 12, 2020, 9:20 PM
29 points
1 comment · 1 min read · LW link
(ought.org)

Rationalist prepper thread

avturchin · Jan 28, 2020, 1:42 PM
29 points
15 comments · 1 min read · LW link

The Bentham Prize at Metaculus

AABoyles · Jan 27, 2020, 2:27 PM
28 points
4 comments · 1 min read · LW link
(www.metaculus.com)

Predictive coding & depression

Steven Byrnes · Jan 3, 2020, 2:38 AM
27 points
9 comments · 7 min read · LW link

(Double-)Inverse Embedded Agency Problem

Shmi · Jan 8, 2020, 4:30 AM
27 points
8 comments · 2 min read · LW link

Book Review—The Origins of Unfairness: Social Categories and Cultural Evolution

Zack_M_Davis · Jan 21, 2020, 6:28 AM
27 points
5 comments · 1 min read · LW link
(unremediatedgender.space)

[Question] Algorithms vs Compute

johnswentworth · Jan 28, 2020, 5:34 PM
26 points
11 comments · 1 min read · LW link

Inner alignment requires making assumptions about human values

Matthew Barnett · Jan 20, 2020, 6:38 PM
26 points
9 comments · 4 min read · LW link

[Question] how has this forum changed your life?

Jon Ronson · Jan 30, 2020, 9:54 PM
26 points
20 comments · 1 min read · LW link

[Question] How would we check if “Mathematicians are generally more Law Abiding?”

Raemon · Jan 12, 2020, 8:23 PM
26 points
4 comments · 1 min read · LW link

Moral uncertainty vs related concepts

MichaelA · Jan 11, 2020, 10:03 AM
26 points
13 comments · 16 min read · LW link

Morality vs related concepts

MichaelA · Jan 7, 2020, 10:47 AM
26 points
17 comments · 8 min read · LW link

Red Flags for Rationalization

dspeyer · Jan 14, 2020, 7:34 AM
25 points
6 comments · 4 min read · LW link

Offer of co-authorship

Vanessa Kosoy · Jan 10, 2020, 5:44 PM
25 points
1 comment · 1 min read · LW link

Outer alignment and imitative amplification

evhub · Jan 10, 2020, 12:26 AM
24 points
11 comments · 9 min read · LW link

Slide deck: Introduction to AI Safety

Aryeh Englander · Jan 29, 2020, 3:57 PM
24 points
0 comments · 1 min read · LW link
(drive.google.com)

[Question] Theory of Causal Models with Dynamic Structure?

johnswentworth · Jan 23, 2020, 7:47 PM
24 points
6 comments · 1 min read · LW link

New paper: The Incentives that Shape Behaviour

RyanCarey · Jan 23, 2020, 7:07 PM
23 points
5 comments · 1 min read · LW link
(arxiv.org)

[AN #84] Reviewing AI alignment work in 2018-19

Rohin Shah · Jan 29, 2020, 6:30 PM
23 points
0 comments · 6 min read · LW link
(mailchi.mp)

[Question] Is it worthwhile to save the cord blood and tissue?

Alexei · Jan 11, 2020, 9:52 PM
22 points
7 comments · 1 min read · LW link

Is backwards causation necessarily absurd?

Chris_Leong · Jan 14, 2020, 7:25 PM
22 points
9 comments · 1 min read · LW link

Less Wrong Poetry Corner: Walter Raleigh’s “The Lie”

Zack_M_Davis · Jan 4, 2020, 10:22 PM
22 points
17 comments · 3 min read · LW link