Would I think for ten thousand years?

Stuart_Armstrong · 11 Feb 2019 19:37 UTC
25 points
13 comments · 1 min read · LW link

“Normative assumptions” need not be complex

Stuart_Armstrong · 11 Feb 2019 19:03 UTC
11 points
0 comments · 2 min read · LW link

Emotional Climate Change—an inconvenient idea

marcus_gabler · 11 Feb 2019 17:55 UTC
−30 points
8 comments · 2 min read · LW link

Coherent behaviour in the real world is an incoherent concept

Richard_Ngo · 11 Feb 2019 17:00 UTC
51 points
17 comments · 9 min read · LW link

[Question] Why do you reject negative utilitarianism?

Teo Ajantaival · 11 Feb 2019 15:38 UTC
32 points
27 comments · 1 min read · LW link

[Question] How important is it that LW has an unlimited supply of karma?

jacobjacob · 11 Feb 2019 1:41 UTC
27 points
9 comments · 2 min read · LW link

Minimize Use of Standard Internet Food Delivery

Zvi · 10 Feb 2019 19:50 UTC
−18 points
28 comments · 2 min read · LW link
(thezvi.wordpress.com)

Propositional Logic, Syntactic Implication

Donald Hobson · 10 Feb 2019 18:12 UTC
5 points
1 comment · 1 min read · LW link

Fighting the allure of depressive realism

aaq · 10 Feb 2019 16:46 UTC
19 points
2 comments · 3 min read · LW link

Structured Concurrency Cross-language Forum

Martin Sustrik · 10 Feb 2019 9:20 UTC
12 points
0 comments · 1 min read · LW link
(250bpm.com)

Probability space has 2 metrics

Donald Hobson · 10 Feb 2019 0:28 UTC
88 points
11 comments · 1 min read · LW link

Some Thoughts on Metaphilosophy

Wei Dai · 10 Feb 2019 0:28 UTC
76 points
30 comments · 4 min read · LW link

The Argument from Philosophical Difficulty

Wei Dai · 10 Feb 2019 0:28 UTC
59 points
31 comments · 1 min read · LW link

Dojo on stress

Elo · 9 Feb 2019 22:49 UTC
13 points
0 comments · 4 min read · LW link

[Question] When should we expect the education bubble to pop? How can we short it?

jacobjacob · 9 Feb 2019 21:39 UTC
35 points
12 comments · 1 min read · LW link

The Cake is a Lie, Part 2.

IncomprehensibleMane · 9 Feb 2019 20:07 UTC
−27 points
7 comments · 9 min read · LW link

The Case for a Bigger Audience

John_Maxwell · 9 Feb 2019 7:22 UTC
68 points
58 comments · 2 min read · LW link

[Question] Can someone design this Google Sheets bug list template for me?

Bae's Theorem · 9 Feb 2019 6:55 UTC
4 points
4 comments · 1 min read · LW link

Reinforcement Learning in the Iterated Amplification Framework

William_S · 9 Feb 2019 0:56 UTC
25 points
12 comments · 4 min read · LW link

HCH is not just Mechanical Turk

William_S · 9 Feb 2019 0:46 UTC
42 points
6 comments · 3 min read · LW link

Friendly SSC and LW meetup

Sean Aubin · 9 Feb 2019 0:20 UTC
1 point
0 comments · 1 min read · LW link

The Hamming Question

Raemon · 8 Feb 2019 19:34 UTC
59 points
38 comments · 3 min read · LW link

Make an appointment with your saner self

MalcolmOcean · 8 Feb 2019 5:05 UTC
28 points
0 comments · 4 min read · LW link

[Question] What is learning?

Pee Doom · 8 Feb 2019 3:18 UTC
11 points
2 comments · 1 min read · LW link

Is this how I choose to show up?

Elo · 8 Feb 2019 0:30 UTC
5 points
3 comments · 5 min read · LW link

Sad!

nws · 7 Feb 2019 19:42 UTC
−1 points
6 comments · 1 min read · LW link

Open Thread February 2019

ryan_b · 7 Feb 2019 18:00 UTC
19 points
19 comments · 1 min read · LW link

EA grants available (to individuals)

Jameson Quinn · 7 Feb 2019 15:17 UTC
34 points
8 comments · 3 min read · LW link

X-risks are a tragedies of the commons

David Scott Krueger (formerly: capybaralet) · 7 Feb 2019 2:48 UTC
9 points
19 comments · 1 min read · LW link

Do Science and Technology Lead to a Fall in Human Values?

jayshi19 · 7 Feb 2019 1:53 UTC
1 point
1 comment · 1 min read · LW link
(techandhumanity.com)

Test Cases for Impact Regularisation Methods

DanielFilan · 6 Feb 2019 21:50 UTC
72 points
5 comments · 13 min read · LW link
(danielfilan.com)

A tentative solution to a certain mythological beast of a problem

Edward Knox · 6 Feb 2019 20:42 UTC
−11 points
9 comments · 1 min read · LW link

AI Alignment is Alchemy.

Jeevan · 6 Feb 2019 20:32 UTC
−9 points
20 comments · 1 min read · LW link

My use of the phrase “Super-Human Feedback”

David Scott Krueger (formerly: capybaralet) · 6 Feb 2019 19:11 UTC
13 points
0 comments · 1 min read · LW link

Thoughts on Ben Garfinkel’s “How sure are we about this AI stuff?”

David Scott Krueger (formerly: capybaralet) · 6 Feb 2019 19:09 UTC
25 points
17 comments · 1 min read · LW link

Show LW: (video) how to remember everything you learn

ArthurLidia · 6 Feb 2019 19:02 UTC
3 points
0 comments · 1 min read · LW link

Does the EA community do “basic science” grants? How do I get one?

Jameson Quinn · 6 Feb 2019 18:10 UTC
7 points
6 comments · 1 min read · LW link

Is the World Getting Better? A brief summary of recent debate

ErickBall · 6 Feb 2019 17:38 UTC
35 points
8 comments · 2 min read · LW link
(capx.co)

Security amplification

paulfchristiano · 6 Feb 2019 17:28 UTC
21 points
2 comments · 13 min read · LW link

Alignment Newsletter #44

Rohin Shah · 6 Feb 2019 8:30 UTC
18 points
0 comments · 9 min read · LW link
(mailchi.mp)

South Bay Meetup March 2nd

David Friedman · 6 Feb 2019 6:48 UTC
1 point
0 comments · 1 min read · LW link

[Question] If Rationality can be likened to a ‘Martial Art’, what would be the Forms?

Bae's Theorem · 6 Feb 2019 5:48 UTC
21 points
10 comments · 1 min read · LW link

Complexity Penalties in Statistical Learning

michael_h · 6 Feb 2019 4:13 UTC
31 points
3 comments · 6 min read · LW link

Automated Nomic Game 2

jefftk · 5 Feb 2019 22:11 UTC
19 points
2 comments · 2 min read · LW link

Should we bait criminals using clones?

Aël Chappuit · 5 Feb 2019 21:13 UTC
−23 points
3 comments · 1 min read · LW link

Describing things: parsimony, fruitfulness, and adaptability

Mary Chernyshenko · 5 Feb 2019 20:59 UTC
1 point
0 comments · 1 min read · LW link

Philosophy as low-energy approximation

Charlie Steiner · 5 Feb 2019 19:34 UTC
40 points
20 comments · 3 min read · LW link

When to use quantilization

RyanCarey · 5 Feb 2019 17:17 UTC
65 points
5 comments · 4 min read · LW link

(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach

Ben Pace · 4 Feb 2019 22:08 UTC
43 points
5 comments · 7 min read · LW link

SSC Paris Meetup, 09/02/18

fbreton · 4 Feb 2019 19:54 UTC
1 point
0 comments · 1 min read · LW link