High-precision claims may be refuted without being replaced with other high-precision claims

jessicata · 30 Jan 2020 23:08 UTC
55 points
30 comments · 3 min read · LW link
(unstableontology.com)

[Question] how has this forum changed your life?

Jon Ronson · 30 Jan 2020 21:54 UTC
26 points
20 comments · 1 min read · LW link

Artificial Intelligence, Values and Alignment

Gordon Seidoh Worley · 30 Jan 2020 19:48 UTC
13 points
1 comment · 1 min read · LW link
(deepmind.com)

If brains are computers, what kind of computers are they? (Dennett transcript)

Ben Pace · 30 Jan 2020 5:07 UTC
37 points
9 comments · 27 min read · LW link

Value uncertainty

MichaelA · 29 Jan 2020 20:16 UTC
20 points
3 comments · 14 min read · LW link

Towards deconfusing values

Gordon Seidoh Worley · 29 Jan 2020 19:28 UTC
12 points
4 comments · 7 min read · LW link

[AN #84] Reviewing AI alignment work in 2018-19

Rohin Shah · 29 Jan 2020 18:30 UTC
23 points
0 comments · 6 min read · LW link
(mailchi.mp)

Slide deck: Introduction to AI Safety

Aryeh Englander · 29 Jan 2020 15:57 UTC
23 points
0 comments · 1 min read · LW link
(drive.google.com)

The Skewed and the Screwed: When Mating Meets Politics

Jacob Falkovich · 29 Jan 2020 15:50 UTC
62 points
6 comments · 15 min read · LW link · 1 review

Twin Cities Meetup 2/1/20: Estimation, Collective Wisdom, and Survey Data

Drake Thomas · 29 Jan 2020 14:55 UTC
1 point
0 comments · 1 min read · LW link

TAISU - Technical AI Safety Unconference

Linda Linsefors · 29 Jan 2020 13:31 UTC
10 points
4 comments · 2 min read · LW link

Mod Notice about Election Discussion

Vaniver · 29 Jan 2020 1:35 UTC
61 points
16 comments · 1 min read · LW link

Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal)

habryka · 28 Jan 2020 23:36 UTC
18 points
0 comments · 1 min read · LW link

Potential Ways to Fight Mazes

Zvi · 28 Jan 2020 22:50 UTC
53 points
9 comments · 18 min read · LW link
(thezvi.wordpress.com)

Draining the swamp

jasoncrawford · 28 Jan 2020 21:37 UTC
99 points
1 comment · 11 min read · LW link
(rootsofprogress.org)

[Question] Hello, is it you I’m looking for?

Knuckels McGinty · 28 Jan 2020 20:56 UTC
9 points
15 comments · 1 min read · LW link

Using vector fields to visualise preferences and make them consistent

28 Jan 2020 19:44 UTC
42 points
32 comments · 11 min read · LW link

If Van der Waals was a neural network

George3d6 · 28 Jan 2020 18:38 UTC
18 points
3 comments · 11 min read · LW link
(blog.cerebralab.com)

Assortative Mating And Autism

Scott Alexander · 28 Jan 2020 18:20 UTC
50 points
2 comments · 4 min read · LW link
(slatestarcodex.com)

[Question] Algorithms vs Compute

johnswentworth · 28 Jan 2020 17:34 UTC
26 points
11 comments · 1 min read · LW link

Appendix: how a subagent could get powerful

Stuart_Armstrong · 28 Jan 2020 15:28 UTC
53 points
14 comments · 4 min read · LW link

Rationalist prepper thread

avturchin · 28 Jan 2020 13:42 UTC
29 points
15 comments · 1 min read · LW link

Cambridge LW/SSC Meetup: Prediction Training

NoSignalNoNoise · 28 Jan 2020 5:25 UTC
6 points
0 comments · 1 min read · LW link

An Epistemically Rational Superbowl

NoSignalNoNoise · 28 Jan 2020 4:05 UTC
8 points
1 comment · 1 min read · LW link

AI Alignment 2018-19 Review

Rohin Shah · 28 Jan 2020 2:19 UTC
126 points
6 comments · 35 min read · LW link

Review: How to Read a Book (Mortimer Adler, Charles Van Doren)

Elizabeth · 27 Jan 2020 21:10 UTC
48 points
8 comments · 3 min read · LW link
(acesounderglass.com)

[Question] What research has been done on the altruistic impact of the usual good actions?

Alexei · 27 Jan 2020 19:33 UTC
11 points
6 comments · 1 min read · LW link

Healing vs. exercise analogies for emotional work

Kaj_Sotala · 27 Jan 2020 19:10 UTC
48 points
8 comments · 2 min read · LW link
(kajsotala.fi)

The Bentham Prize at Metaculus

AABoyles · 27 Jan 2020 14:27 UTC
28 points
4 comments · 1 min read · LW link
(www.metaculus.com)

Porting My Rhythm Setup

jefftk · 26 Jan 2020 21:20 UTC
9 points
0 comments · 3 min read · LW link
(www.jefftk.com)

UML VIII: Linear Predictors (2)

Rafael Harth · 26 Jan 2020 20:09 UTC
9 points
2 comments · 10 min read · LW link

[Question] What happens if we reverse Newcomb’s Paradox and replace it with two negative sums? Doesn’t it kinda maybe affirm Roko’s Basilisk?

Habby · 26 Jan 2020 18:08 UTC
1 point
3 comments · 2 min read · LW link

On hiding the source of knowledge

jessicata · 26 Jan 2020 2:48 UTC
115 points
40 comments · 3 min read · LW link
(unstableontology.com)

Hedonic asymmetries

paulfchristiano · 26 Jan 2020 2:10 UTC
98 points
22 comments · 2 min read · LW link
(sideways-view.com)

Moral public goods

paulfchristiano · 26 Jan 2020 0:10 UTC
147 points
74 comments · 4 min read · LW link
(sideways-view.com)

Coordination as a Scarce Resource

johnswentworth · 25 Jan 2020 23:32 UTC
251 points
22 comments · 4 min read · LW link · 2 reviews

[Question] Are the bad epistemic conditions global?

jmh · 25 Jan 2020 23:31 UTC
18 points
1 comment · 1 min read · LW link

Material Goods as an Abundant Resource

johnswentworth · 25 Jan 2020 23:23 UTC
81 points
10 comments · 5 min read · LW link

Constraints & Slackness as a Worldview Generator

johnswentworth · 25 Jan 2020 23:18 UTC
55 points
4 comments · 4 min read · LW link

Technology Changes Constraints

johnswentworth · 25 Jan 2020 23:13 UTC
116 points
6 comments · 4 min read · LW link

SSC Zürich February Meetup

Vitor · 25 Jan 2020 17:21 UTC
2 points
0 comments · 1 min read · LW link

Ten Causes of Mazedom

Zvi · 25 Jan 2020 13:40 UTC
59 points
6 comments · 22 min read · LW link
(thezvi.wordpress.com)

On the ontological development of consciousness

jessicata · 25 Jan 2020 5:56 UTC
51 points
7 comments · 4 min read · LW link
(unstableontology.com)

[Question] Have epistemic conditions always been this bad?

Wei Dai · 25 Jan 2020 4:42 UTC
210 points
106 comments · 4 min read · LW link · 1 review

Cambridge Prediction Game

NoSignalNoNoise · 25 Jan 2020 3:57 UTC
13 points
3 comments · 2 min read · LW link

SSC Halifax Meetup - January 25

interstice · 25 Jan 2020 1:15 UTC
4 points
0 comments · 1 min read · LW link

Litany Against Anger

namespace · 25 Jan 2020 0:56 UTC
13 points
2 comments · 1 min read · LW link

AI alignment concepts: philosophical breakers, stoppers, and distorters

JustinShovelain · 24 Jan 2020 19:23 UTC
20 points
3 comments · 3 min read · LW link

The two-layer model of human values, and problems with synthesizing preferences

Kaj_Sotala · 24 Jan 2020 15:17 UTC
70 points
16 comments · 9 min read · LW link

[Question] How much do we know about how brains learn?

Kenny · 24 Jan 2020 14:46 UTC
8 points
0 comments · 1 min read · LW link