Competition for Power

Samo Burja · Apr 4, 2018, 5:10 PM
17 points
8 comments · 9 min read · LW link
(medium.com)

Realistic thought experiments

KatjaGrace · Apr 4, 2018, 1:50 AM
26 points
8 comments · 1 min read · LW link
(meteuphoric.wordpress.com)

HPMoE 3

alkjash · Apr 4, 2018, 1:00 AM
4 points
0 comments · 1 min read · LW link
(radimentary.wordpress.com)

An Argument For Prioritizing: “Positively Shaping the Development of Crypto-assets”

rhys_lindmark · Apr 3, 2018, 10:12 PM
3 points
1 comment · LW link
(effective-altruism.com)

Specification gaming examples in AI

Vika · Apr 3, 2018, 12:30 PM
47 points
9 comments · 1 min read · LW link · 2 reviews

[Draft for commenting] Near-Term AI risks predictions

avturchin · Apr 3, 2018, 10:29 AM
6 points
6 comments · 1 min read · LW link

Unyielding Yoda Timers: Taking the Hammertime Final Exam

TurnTrout · Apr 3, 2018, 2:38 AM
16 points
3 comments · 1 min read · LW link

Musings on Exploration

Diffractor · Apr 3, 2018, 2:15 AM
1 point
4 comments · 6 min read · LW link

Suffering and Intractable Pain

Gordon Seidoh Worley · Apr 3, 2018, 1:05 AM
11 points
4 comments · 7 min read · LW link
(mapandterritory.org)

Why Karma 2.0? (A Kabbalistic Explanation)

Ben Pace · Apr 2, 2018, 8:43 PM
15 points
1 comment · 1 min read · LW link

Brno: Far future, existential risk and AI safety

Jan_Kulveit · Apr 2, 2018, 7:11 PM
3 points
0 comments · 1 min read · LW link

HPMoE 2

alkjash · Apr 2, 2018, 5:30 AM
6 points
3 comments · 1 min read · LW link
(radimentary.wordpress.com)

Internal Diet Crux

Jacob Falkovich · Apr 2, 2018, 5:05 AM
45 points
8 comments · 6 min read · LW link

New York Rationalist Seder

Jacob Falkovich · Apr 2, 2018, 12:24 AM
3 points
0 comments · 1 min read · LW link

Can corrigibility be learned safely?

Wei Dai · Apr 1, 2018, 11:07 PM
35 points
115 comments · 4 min read · LW link

Global insect declines: Why aren’t we all dead yet?

eukaryote · Apr 1, 2018, 8:38 PM
28 points
26 comments · 1 min read · LW link

Announcing Rational Newsletter

Alexey Lapitsky · Apr 1, 2018, 2:37 PM
10 points
9 comments · 1 min read · LW link

April Fools: Announcing: Karma 2.0

habryka · Apr 1, 2018, 10:33 AM
63 points
56 comments · 1 min read · LW link

Life hacks

Jan_Kulveit · Apr 1, 2018, 10:29 AM
4 points
0 comments · 1 min read · LW link

One-Year Anniversary Retrospective—Los Angeles

RobertM · Apr 1, 2018, 6:34 AM
12 points
4 comments · 3 min read · LW link

My take on agent foundations: formalizing metaphilosophical competence

zhukeepa · Apr 1, 2018, 6:33 AM
21 points
6 comments · 1 min read · LW link

Corrigible but misaligned: a superintelligent messiah

zhukeepa · Apr 1, 2018, 6:20 AM
28 points
26 comments · 5 min read · LW link

LW Update 3/31 - Post Highlights and Bug Fixes

Raemon · Apr 1, 2018, 4:01 AM
10 points
2 comments · 1 min read · LW link

Schelling Shifts During AI Self-Modification

MikailKhan · Apr 1, 2018, 1:58 AM
6 points
3 comments · 6 min read · LW link

Reframing misaligned AGI’s: well-intentioned non-neurotypical assistants

zhukeepa · Apr 1, 2018, 1:22 AM
46 points
14 comments · 2 min read · LW link

The Regularizing-Reducing Model

RyenKrusinga · Apr 1, 2018, 1:16 AM
3 points
6 comments · 1 min read · LW link
(drive.google.com)

Metaphilosophical competence can’t be disentangled from alignment

zhukeepa · Apr 1, 2018, 12:38 AM
46 points
39 comments · 3 min read · LW link

Belief alignment

hnowak · Apr 1, 2018, 12:13 AM
1 point
2 comments · 6 min read · LW link

A Sketch of Good Communication

Ben Pace · Mar 31, 2018, 10:48 PM
208 points
36 comments · 3 min read · LW link · 1 review

Harry Potter and the Method of Entropy 1 [LessWrong version]

habryka · Mar 31, 2018, 8:38 PM
6 points
0 comments · 3 min read · LW link

Harry Potter and the Method of Entropy

alkjash · Mar 31, 2018, 8:10 PM
11 points
12 comments · 1 min read · LW link
(radimentary.wordpress.com)

Salience

Tueskes · Mar 31, 2018, 7:52 PM
6 points
1 comment · 4 min read · LW link

Opportunities for individual donors in AI safety

Alex Flint · Mar 31, 2018, 6:37 PM
30 points
3 comments · 11 min read · LW link

Time in Machine Metaethics

Razmęk Massaräinen · Mar 31, 2018, 3:02 PM
2 points
1 comment · 6 min read · LW link

Nice Things

Zvi · Mar 31, 2018, 12:30 PM
14 points
0 comments · 2 min read · LW link
(thezvi.wordpress.com)

Reducing Agents: When abstractions break

Hazard · Mar 31, 2018, 12:03 AM
13 points
10 comments · 8 min read · LW link

Sydney Rationality Dojo—April

luminosity · Mar 30, 2018, 2:18 PM
1 point
0 comments · 1 min read · LW link

The Eternal Grind

Zvi · Mar 30, 2018, 11:40 AM
10 points
1 comment · 17 min read · LW link
(thezvi.wordpress.com)

Reward hacking and Goodhart’s law by evolutionary algorithms

Jan_Kulveit · Mar 30, 2018, 7:57 AM
18 points
5 comments · 1 min read · LW link
(arxiv.org)

Rationalist Lent is over

Qiaochu_Yuan · Mar 30, 2018, 5:57 AM
20 points
16 comments · 1 min read · LW link

Resolving human values, completely and adequately

Stuart_Armstrong · Mar 30, 2018, 3:35 AM
32 points
30 comments · 12 min read · LW link

Charting Deaths: Reality vs Reported

lifelonglearner · Mar 30, 2018, 12:50 AM
13 points
1 comment · 1 min read · LW link
(owenshen24.github.io)

Site search will be down for a few hours

habryka · Mar 30, 2018, 12:43 AM
4 points
0 comments · 1 min read · LW link

Hufflepuff Cynicism on Hypocrisy

abramdemski · Mar 29, 2018, 9:01 PM
21 points
78 comments · 5 min read · LW link

2018 Prediction Contest—Propositions Needed

jbeshir · Mar 29, 2018, 3:02 PM
7 points
6 comments · 4 min read · LW link

A framework for thinking about AI timescales

Tobias_Baumann · Mar 29, 2018, 9:29 AM
7 points
0 comments · LW link
(s-risks.org)

Every Implementation of You is You: An Intuition Ladder

lolbifrons · Mar 29, 2018, 5:14 AM
3 points
47 comments · 3 min read · LW link

Washington, D.C.: Meta-Meta Meetup

RobinZ · Mar 28, 2018, 6:54 PM
2 points
0 comments · 1 min read · LW link

Open-Category Classification

TurnTrout · Mar 28, 2018, 2:49 PM
14 points
6 comments · 10 min read · LW link

*Deleted*

Martin Bernstorff · Mar 28, 2018, 10:22 AM
−5 points
21 comments · 1 min read · LW link